| Summary: | volumes not coming back up after reboot | | |
|---|---|---|---|
| Product: | [Retired] GlusterSP | Reporter: | Amar Tumballi <amarts> |
| Component: | core | Assignee: | Balamurugan Arumugam <bala> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | low | | |
| Version: | unspecified | CC: | fharshav, platform, vraman, webteam |
| Target Milestone: | 3.0.2 | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Balamurugan Arumugam 2010-01-23 01:41:20 UTC
How to get this bug: I have 4 working volumes (on 4 nodes), all exporting NFS and CIFS. Something happened and node1 hung, so I hard-rebooted it. After the reboot, node1 has only the first volume up (export, NFS, and CIFS all working); none of the other volumes are started, and their volume definitions are not even present in smb.conf. All volumes in the volume manager show 'down' (even though all of them are running fine on the other nodes). I can't give login access to this setup as it is behind a firewall at the customer's site, but I would request a look at this as quickly as possible.

-Amar

(In reply to comment #1)
> Volume manager says running for a volume only if glusterfsd is running in all
> selected servers, otherwise it shows as down.

That's fine, but my major concern is why the system didn't come back to its previous running state after the reboot, and why only one volume was started and not the others. This is critical, as there is no way to start a volume on only one node: if a volume doesn't come back up, even having a mirror doesn't help (note that even the export volumes don't start).

(In reply to comment #2)
> That's fine, but my major concern is why the system didn't come back to its
> previous running state after the reboot, and why only one volume was started
> and not the others.

This has been reproduced and is tracked as a known issue with 3.0.1; we will fix it in the 3.0.2 update.

Fixed, tested, and working.
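To illustrate the status rule described in comment #1 (the volume manager reports a volume as running only if glusterfsd is running on all selected servers), here is a minimal Python sketch. The node names, volume names, and the ssh/pgrep probe are assumptions for illustration, not the actual GlusterSP implementation.

```python
# Sketch of comment #1's rule: a volume is "up" only when glusterfsd is
# running on *every* server selected for that volume. Everything named
# here (nodes, volumes, the probe) is hypothetical.
import subprocess
from typing import Callable, Dict, List


def glusterfsd_running(volume: str, server: str) -> bool:
    """Assumed probe: ssh to the server and look for a glusterfsd process.

    (A per-volume check would inspect the process's volfile argument; a
    coarse per-server check is enough to illustrate the aggregation rule.)
    """
    result = subprocess.run(["ssh", server, "pgrep", "-x", "glusterfsd"],
                            capture_output=True)
    return result.returncode == 0  # pgrep exits 0 only when it finds a match


def volume_status(volumes: Dict[str, List[str]],
                  probe: Callable[[str, str], bool]) -> Dict[str, str]:
    """Apply the all-servers rule: one dead glusterfsd marks the volume down."""
    return {vol: "up" if all(probe(vol, srv) for srv in servers) else "down"
            for vol, servers in volumes.items()}


if __name__ == "__main__":
    # Hypothetical layout matching the report: 4 volumes across 4 nodes.
    nodes = ["node1", "node2", "node3", "node4"]
    volumes = {f"vol{i}": nodes for i in range(1, 5)}

    # Simulate the state after node1's hard reboot: only vol1's glusterfsd
    # came back on node1, while node2..node4 still serve everything.
    running = {(v, n) for v in volumes for n in nodes[1:]}
    running.add(("vol1", "node1"))

    print(volume_status(volumes, lambda v, s: (v, s) in running))
    # {'vol1': 'up', 'vol2': 'down', 'vol3': 'down', 'vol4': 'down'}
```

Note how vol2..vol4 show "down" cluster-wide even though they are still serving fine from node2..node4, which is exactly the confusing display the reporter describes.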
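For the other symptom in the description (the volume definitions disappearing from smb.conf after the reboot), a quick diagnostic sketch along these lines could confirm which shares survived. The smb.conf path and the expected volume names are assumptions for illustration.

```python
# Hedged diagnostic: list the share sections Samba actually has and diff
# them against the volumes we expect to be exported over CIFS.
import configparser

EXPECTED_VOLUMES = ["vol1", "vol2", "vol3", "vol4"]  # hypothetical names
SMB_CONF = "/etc/samba/smb.conf"  # typical location; may differ per distro

# smb.conf is ini-like; disable interpolation so '%'-macros parse cleanly.
parser = configparser.ConfigParser(strict=False, allow_no_value=True,
                                   interpolation=None)
with open(SMB_CONF) as f:
    parser.read_file(f)

shares = {s for s in parser.sections() if s != "global"}
missing = [v for v in EXPECTED_VOLUMES if v not in shares]
print("shares found:", sorted(shares))
print("volumes missing from smb.conf:", missing or "none")
```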