Bug 201932
| Summary: | GFS io error on Gulm and Cman/dlm | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Pascal Pucci <pascal.pucci> |
| Component: | gfs-kmod | Assignee: | Kiersten (Kerri) Anderson <kanderso> |
| Status: | CLOSED NOTABUG | QA Contact: | GFS Bugs <gfs-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 5.0 | CC: | pjakobi |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2006-09-20 15:53:30 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Pascal Pucci 2006-08-09 20:40:49 UTC
What storage array are you using in this configuration? Are you able to do concurrent dd's from both nodes to the exported LUNs? Start with one LUN per node and validate that your storage is stable, then combine the LUNs with cluster volume management. If your storage array cannot handle concurrent reads and writes from both nodes at the same time, the file system will not be able to get the data it needs to operate.

Closing this as not a bug. It looks like a problem with the underlying storage, and with no further information available, the cluster software appears to have behaved correctly.

Moving all RHCS ver 5 bugs to RHEL 5 so we can remove RHCS v5, which never existed. Moving all closed bugs to gfs-kmod to match the rpm name; GFS-kernel will be removed.
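The validation suggested above (concurrent dd's from each node against the exported LUNs) could be sketched roughly as follows. This is only an illustration, not part of the bug report: the `LUN` path is a placeholder you would point at the real shared device on each node, and on real hardware you would add `oflag=direct`/`iflag=direct` to bypass the page cache so the array itself is exercised.

```shell
#!/bin/sh
# Hypothetical device path -- set LUN to the exported LUN on each node,
# e.g. LUN=/dev/mapper/mpath0. Defaults to a local file for illustration.
LUN=${LUN:-/tmp/lun-test.img}

# Write 16 MiB of zeros, forcing the data out before dd exits.
dd if=/dev/zero of="$LUN" bs=1M count=16 conv=fsync 2>/dev/null

# Read the same region back to exercise the read path.
dd if="$LUN" of=/dev/null bs=1M count=16 2>/dev/null

echo "dd write/read cycle completed on $LUN"
```

Running this simultaneously on both cluster nodes against the same LUN (and checking that neither node sees I/O errors in `dmesg`) is one way to confirm the storage tolerates concurrent access before layering GFS on top.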