Bug 852556
| Summary: | Need information on retiring bricks/nodes | | |
|---|---|---|---|
| Product: | [Community] Gluster-Documentation | Reporter: | Shawn Heisey <redhat> |
| Component: | Other | Assignee: | Anjana Suparna Sriram <asriram> |
| Status: | ASSIGNED | Severity: | low |
| Priority: | unspecified | Version: | unspecified |
| Hardware: | x86_64 | OS: | Linux |
| CC: | bugs, redhat | Doc Type: | Bug Fix |
| Type: | Bug | | |
Description
Shawn Heisey
2012-08-28 22:42:52 UTC
When I did this procedure before, I did not test whether the migrated data was accessible. Today I ran a new test. This will probably require a new bug against gluster itself rather than the documentation, but I wanted to get the information written down while it's fresh.

I loaded a small 4x2 volume two-thirds full and tried to gracefully remove the last set of bricks with the procedure I have outlined above. All of the remaining bricks ran out of disk space during the migration, and there were thousands of migration failures in the log. I started the remove-brick back up, and it again ran out of disk space. A third attempt completed without migration errors in the log. At this point, I had not yet issued the commit. I then tried to access files in the volume from a client mount: everything that originally lived on the removed bricks was inaccessible.

Final status: once I issued the remove-brick commit, everything magically started working. I'm glad there was no actual data loss, but if I am removing a set of 4 TB bricks that is roughly 75% full, it will take a very long time for 3 TB of data (millions of files) to migrate. The files that get migrated first will be unavailable for the entire duration of the migration, which is unacceptable by any standard.
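For reference, the graceful-removal sequence I'm describing is roughly the following (a minimal sketch: the volume name `testvol` and the server/brick paths are placeholders for illustration, not my actual layout):

```sh
# Check free space on the remaining bricks first (run on each remaining
# server) -- in my test the migration failed repeatedly because the
# surviving bricks filled up.
df -h /bricks/b1

# Start migrating data off the bricks being retired. On a 4x2
# distribute-replicate volume, both bricks of a replica pair must be
# removed together.
gluster volume remove-brick testvol \
    server7:/bricks/b4 server8:/bricks/b4 start

# Poll until the migration reports completed, and watch the failures
# column -- this is where I saw thousands of migration failures.
gluster volume remove-brick testvol \
    server7:/bricks/b4 server8:/bricks/b4 status

# Only after a clean run, finalize the removal. In my test, files that
# had already been migrated were unreadable from clients until this
# commit was issued.
gluster volume remove-brick testvol \
    server7:/bricks/b4 server8:/bricks/b4 commit
```

The availability problem described above is the window between a file being migrated by `start` and the final `commit`.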