Bug 970960 - [RFE] Improve handling of failure of a disk, raid array or raid controller
Summary: [RFE] Improve handling of failure of a disk, raid array or raid controller
Status: CLOSED DUPLICATE of bug 852578
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs   
Version: 2.1
Hardware: All Linux
Target Milestone: ---
Assignee: Niels de Vos
QA Contact: Sachidananda Urs
Keywords: FutureFeature, Patch, Triaged
Depends On: 971774
Reported: 2013-06-05 10:24 UTC by Christian Horn
Modified: 2018-12-03 19:01 UTC (History)
4 users (show)

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-07-11 09:54:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 391573 None None None Never

Description Christian Horn 2013-06-05 10:24:55 UTC
1. Proposed title of this feature request
   Improve handling of failure of a disk, raid array or raid controller 

3. What is the nature and description of the request?
   Currently, failure of a disk, raid array or raid controller leads to
   writes to the XFS filesystem failing.  When this happens:
   - I/O errors appear on the client
   - the brick with the failed disks stays online in 'gluster volume status'
     while in fact it is no longer available
   - the node does NOT fence itself or do anything else to recover, as the
     gluster layer is unaware of the failed XFS filesystem

   This RFE requests that this behavior be improved:
   the brick with the failed disks should drop out of the gluster infrastructure.

4. Why does the customer need this? (List the business requirements here)
   This will improve the reliability of gluster setups:
   gluster should be notified when I/O errors occur on the XFS
   filesystems of the bricks.

5. How would the customer like to achieve this? (List the functional requirements here)
   To be discussed, but it seems sane to get the affected brick marked
   as failed.

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
   I/O errors on the XFS brick (e.g. when a hard disk fails) should be
   handled better, i.e. 'gluster volume status' should reflect the failure.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

9. Is the sales team involved in this request and do they have any additional input?

10. List any affected packages or components.

11. Would the customer be able to assist in testing this functionality if implemented?

Comment 2 Niels de Vos 2013-06-05 16:26:26 UTC
A basic health check of the underlying filesystem should be sufficient.

After some tests, it seems that a stat() returns -EIO in case of common disk
failures.

I have a simple implementation based on timers for a health-check and will do
some tests and share the results.

Comment 5 Niels de Vos 2013-07-01 15:10:24 UTC
Bug 971774 has a test-script as attachment 767432 [details].
