1. Proposed title of this feature request
Improve handling of failure of a disk, RAID array or RAID controller
3. What is the nature and description of the request?
Currently, the failure of a disk, RAID array or RAID controller causes
writes to the XFS filesystem to fail. When this happens:
- I/O errors on the client
- the brick with failed disks stays online in 'gluster volume status'
while in fact it is no longer available
- the node does NOT fence itself or take any other recovery action, as
the gluster layer is unaware of the failed XFS filesystem
This RFE requests improving this behavior: the brick with failed disks
should drop out of the gluster infrastructure.
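For illustration only (hypothetical output; the exact column layout
varies by gluster version), the desired end state would be that
'gluster volume status' reports the affected brick as offline:

    Gluster process                         Port    Online  Pid
    ------------------------------------------------------------
    Brick server1:/bricks/brick1            49152   N       N/A
    Brick server2:/bricks/brick1            49152   Y       2103

Here the brick on server1 sits on the failed disk; today it would still
show 'Y' even though all I/O to it fails.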
4. Why does the customer need this? (List the business requirements here)
This will improve the reliability of the gluster setup: gluster should
be notified when I/O errors occur on the XFS filesystems backing the
bricks.
5. How would the customer like to achieve this? (List the functional requirements here)
To be discussed, but it seems sane to have the affected brick marked as
failed and reported offline in 'gluster volume status'.
6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
I/O errors on the XFS brick (e.g. when a hard disk fails) should be
handled better; in particular, 'gluster volume status' should reflect
the failure.
7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?
9. Is the sales team involved in this request and do they have any additional input?
10. List any affected packages or components.
11. Would the customer be able to assist in testing this functionality if implemented?
A basic health check of the underlying filesystem should be sufficient.
After some tests, it seems that a stat() returns -EIO in case of common
disk failures.
I have a simple timer-based implementation of a health check and will
do some tests and share the results.
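As an illustration of the approach, here is a minimal sketch of such a
timer-based check (not the actual implementation; the brick path and
the 30-second interval are assumptions):

    /* Timer-based health check: stat() a path on the brick at a fixed
     * interval and treat EIO as a failed underlying filesystem. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
            const char *brick = (argc > 1) ? argv[1] : "/bricks/brick1";
            struct stat st;

            for (;;) {
                    if (stat(brick, &st) != 0 && errno == EIO) {
                            /* The filesystem returned an I/O error; here
                             * the brick process should mark itself failed
                             * so it drops out of the volume. */
                            fprintf(stderr, "health-check: stat(%s): %s\n",
                                    brick, strerror(errno));
                            return 1;
                    }
                    sleep(30); /* assumed check interval */
            }
    }

In the real brick process this could run as a timer inside the
storage/posix translator and trigger a graceful shutdown of the brick
instead of exiting the process.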
Bug 971774 has a test script attached as attachment 767432.