| Summary: | [Scale] Not Able To Start Volume (8-hosts, 8-bricks each host) | ||||||
|---|---|---|---|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Matt Mahoney <mmahoney> | ||||
| Component: | rhsc | Assignee: | Ramesh N <rnachimu> | ||||
| Status: | CLOSED WORKSFORME | QA Contact: | Sudhir D <sdharane> | ||||
| Severity: | high | Docs Contact: | |||||
| Priority: | high | ||||||
| Version: | 2.1 | CC: | dtsang, knarra, mmahoney, pprakash, rhs-bugs, ssampat | ||||
| Target Milestone: | --- | Keywords: | ZStream | ||||
| Target Release: | RHGS 2.1.2 | Flags: | mmahoney: needinfo+ |||||
| Hardware: | Unspecified | ||||||
| OS: | Unspecified | ||||||
| Whiteboard: | |||||||
| Fixed In Version: | | Doc Type: | Bug Fix |||||
| Doc Text: | | Story Points: | --- |||||
| Clone Of: | | Environment: | |||||
| Last Closed: | 2013-12-03 12:07:34 UTC | Type: | Bug | ||||
| Regression: | --- | Mount Type: | --- | ||||
| Documentation: | --- | CRM: | |||||
| Verified Versions: | | Category: | --- |||||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |||||
| Cloudforms Team: | --- | Target Upstream Version: | |||||
| Attachments: | ||||||
Description

Matt Mahoney 2013-11-13 16:55:29 UTC
Created attachment 823547 [details]
Logs
Note: The 64-host and 8-volume configuration was an Import Cluster operation; neither the hosts nor the volumes were added or created through the Console.

We are not able to reproduce this error with a similar setup (64 hosts, 8 bricks per host, and 8 volumes with 64 bricks in each volume); we were able to start and stop the volumes many times without any issue. The issue is still not reproducible with the given steps: volume start/stop always works without any problem. From the log attached by Matt, I can see that most of the time gluster failed to start the volume with the error 'Another transaction is in progress'. I think this is normal, and volume start/stop should work after some time. Please verify once again with the same setup. We are not able to reproduce this error with the current setup; we may try to reproduce it on the Corbett setup.
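The 'Another transaction is in progress' error in the attached logs indicates that glusterd's cluster-wide lock was held by a concurrent operation, which is transient; retrying the start after a short delay is the usual workaround. A minimal sketch of such a retry wrapper, assuming the volume name `vol01`, the function name, and the `GLUSTER_CMD`/`RETRY_DELAY` knobs are all illustrative (not gluster options):

```shell
# Sketch: retry "gluster volume start" while glusterd reports that
# another transaction holds the cluster-wide lock.
# GLUSTER_CMD and RETRY_DELAY are illustrative tuning/test knobs.
GLUSTER_CMD="${GLUSTER_CMD:-gluster}"
RETRY_DELAY="${RETRY_DELAY:-5}"

start_volume_with_retry() {
    volname=$1
    max_tries=${2:-5}
    i=1
    while [ "$i" -le "$max_tries" ]; do
        # Attempt the start; capture stderr too so we can inspect the error.
        out=$($GLUSTER_CMD volume start "$volname" 2>&1) && {
            echo "started $volname on attempt $i"
            return 0
        }
        case $out in
            *"Another transaction is in progress"*)
                # Transient cluster lock contention; back off and retry.
                sleep "$RETRY_DELAY" ;;
            *)
                # Any other failure is treated as permanent.
                echo "attempt $i failed: $out" >&2
                return 1 ;;
        esac
        i=$((i + 1))
    done
    echo "giving up on $volname after $max_tries attempts" >&2
    return 1
}
```

Invoked as `start_volume_with_retry vol01`, this matches the observed behaviour that the start eventually succeeds "after some time" once the competing transaction releases the lock.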