Bug 1334262
| Summary: | [Tiering]: Handling of inconsistent state in case of timeout during tier attach | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> |
| Component: | tier | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED WONTFIX | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | rhgs-3.1 | CC: | bmohanra, nbalacha, rhinduja, rhs-bugs, rkavunga, sanandpa |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: |
If the "gluster volume tier attach" command times out, it can result in either of two situations: either the volume does not become a tiered volume, or the volume becomes tiered but the tier daemon is not started.
Workaround: When the timeout is observed, follow these steps:
1. Check whether the volume has become a tiered volume.
   - If it has not, rerun attach tier.
   - If it has, proceed to the next step.
2. Check whether the tier daemons were created on each server.
   - If the tier daemons were not created, execute the following command:
     gluster volume tier <volname> start
(A command-level sketch of these checks appears after this table.)
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-08 19:03:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1311843 | ||
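The documented workaround boils down to two checks with standard gluster CLI commands. Below is a minimal sketch of those checks; the volume name tiervol is a placeholder, and the exact output fields to look for (the hot-tier brick listing in `gluster volume info`, the tier daemon entry in `gluster volume status`) are assumptions about how a tiered volume typically appears, so verify against the output on your own cluster.

```sh
# Step 1: check whether the volume became a tiered volume after the timeout.
# On a tiered volume, the info output lists hot-tier and cold-tier bricks;
# if they are absent, the attach did not take effect and can simply be rerun.
gluster volume info tiervol

# Step 2: check whether the tier daemon was created on each server.
# Look for the tier (rebalance) process in the per-node status listing.
gluster volume status tiervol
```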
Description
Sweta Anandpara
2016-05-09 09:48:45 UTC
Please ignore the SIGTERM messages; those occurred because one of the bricks was deleted by mistake.

Recovery steps, discussed and reviewed with glusterd engineering after recreating the problem: the hot tier would have been attached on either all of the nodes or none of them. On seeing a timeout:

1. Check if the graph has become a tiered volume.
   - 1a. If not, rerun attach tier.
   - 1b. If it has, go to step 2.
2. Check if the rebalance daemons were created on each server.
   - 2b. If the rebalance daemons were not created, run gluster volume tier <vol> start.

As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.
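For reference, a minimal sketch of the recovery commands described above. The volume name tiervol and the hot-tier brick paths are placeholders, and the attach syntax shown is the glusterfs 3.7-era form (gluster volume tier <volname> attach [replica <count>] <bricks>...), so confirm it against the version in use.

```sh
# 1a. The volume did not become a tiered volume: rerun the attach.
#     (Placeholder replica count and hot-tier brick paths shown.)
gluster volume tier tiervol attach replica 2 \
    server1:/rhgs/brick_hot1 server2:/rhgs/brick_hot2

# 2b. The volume is tiered, but the tier daemons were not created: start them.
gluster volume tier tiervol start
```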