Bug 616504
| Field | Value |
|---|---|
| Summary | Problem with reboot/halt of a cluster node |
| Product | Red Hat Enterprise Linux 5 |
| Component | cman |
| Version | 5.5 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED DUPLICATE |
| Severity | high |
| Priority | low |
| Reporter | rauch |
| Assignee | Christine Caulfield <ccaulfie> |
| QA Contact | Cluster QE <mspqa-list> |
| CC | cluster-maint, edamato, hlawatschek |
| Target Milestone | rc |
| Target Release | --- |
| Doc Type | Bug Fix |
| Last Closed | 2011-01-26 14:57:52 UTC |
Created attachment 433202 [details]: cluster.conf

*** This bug has been marked as a duplicate of bug 609075 ***
Created attachment 433200 [details]: messages

Description of problem:
It is not possible to reboot/halt a cluster node without the GFS filesystem freezing for several seconds (~30) and the node being fenced.

Version-Release number of selected component (if applicable):
RHEL 5.5 with latest updates
cman-2.0.115-34.el5_5.1
kernel-2.6.18-194.8.1.el5
openais-0.80.6-16.el5_5.2
kmod-gfs-0.1.34-12.el5
gfs-utils-0.1.20-7.el5

How reproducible:
Install RHEL 5.5 with latest updates, set up a cluster, and reboot one node.

Steps to Reproduce:
1. Set up a 2-node cluster with qdisk
2. Reboot a node
3. Try to read/write from/to the GFS filesystem

Actual results:
The cluster filesystem (GFS) freezes for several seconds (~30) and the node gets fenced.

Expected results:
Stopping a cluster node should not result in a filesystem freeze or a fence of the node.

Additional info:
cluster.conf and messages attached.
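Step 3 above can be probed with a small script: a bounded synced write either completes quickly (mount is healthy) or blocks until the cluster recovers (~30 s in this report). This is a minimal sketch, not from the report itself; the `/mnt/gfs` default path is an assumption, and for self-containment the demo run uses a temporary directory instead of a real GFS mount.

```shell
#!/bin/sh
# Responsiveness probe for a (possibly frozen) filesystem.
# Usage: freeze-check.sh [mountpoint]   (default /mnt/gfs -- hypothetical path)
check_fs() {
    mnt=$1
    # A healthy filesystem finishes a small fsync'd write well inside 5s;
    # a frozen GFS mount blocks until recovery/fencing completes.
    if timeout 5 dd if=/dev/zero of="$mnt/.freeze-test" \
            bs=4k count=1 conv=fsync 2>/dev/null; then
        rm -f "$mnt/.freeze-test"
        echo "$mnt: responsive"
    else
        echo "$mnt: FROZEN (write blocked for >5s)"
    fi
}

# Demo on a temp dir so the script runs anywhere; on a cluster node you
# would pass the GFS mountpoint instead.
d=$(mktemp -d)
check_fs "$d"
rmdir "$d"
```

Running this in a loop on one node while rebooting the other makes the ~30-second freeze window directly measurable.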