Bug 1030008 - [Scale] Not Able To Start Volume (8-hosts, 8-bricks each host)
[Scale] Not Able To Start Volume (8-hosts, 8-bricks each host)
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Hardware: Unspecified OS: Unspecified
Priority: high Severity: high
: RHGS 2.1.2
Assigned To: Ramesh N
Sudhir D
: ZStream
Depends On:
Reported: 2013-11-13 11:55 EST by Matt Mahoney
Modified: 2015-05-13 12:34 EDT (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-12-03 07:07:34 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
mmahoney: needinfo+

Attachments (Terms of Use)
Logs (3.93 MB, application/x-gzip)
2013-11-13 11:58 EST, Matt Mahoney

Description Matt Mahoney 2013-11-13 11:55:29 EST
Description of problem:
A volume consisting of 8 hosts with 8 bricks per host fails to start.

Note: The configuration consists of a total of 8 volumes, each with 8 hosts and 8 bricks per host.

Ovirt log message:
2013-11-13 11:36:55,195 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (ajp-/ [1fe610de] Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 2806cc3f-25e6-4bc5-a413-cb32930e7ef3 value: GLUSTER

Browser message:
Error while executing action: A Request to the Server failed with the following Status Code: 500

Version-Release number of selected component (if applicable):
Big Bend

How reproducible:

Steps to Reproduce:
1. Create 8 volumes, each consisting of 8 hosts with 8 bricks per host
2. Start any volume

Actual results:
Volume does not start

Expected results:
Volume should start

Additional info:
Comment 2 Matt Mahoney 2013-11-13 11:58:59 EST
Created attachment 823547 [details]
Comment 3 Matt Mahoney 2013-11-15 09:33:44 EST
Note: The 64-host and 8-volume configuration was created via an Import Cluster operation. Neither the hosts nor the volumes were added/created with the Console.
Comment 4 Ramesh N 2013-11-25 05:16:50 EST
We are not able to reproduce this error with a similar setup (64 hosts, 8 bricks per host, and 8 volumes with 64 bricks each).

We were able to start and stop the volumes many times without any issue.
Comment 6 Ramesh N 2013-11-26 04:04:45 EST
The issue is still not reproducible with the given steps; volume start/stop always works without any issue.

I can see from the log attached by Matt that most of the time gluster failed to start the volume with the error 'Another transaction is in progress'. I think this is normal, and volume start/stop should work after some time.

Please verify once again with the same setup.
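The behavior described above, where gluster rejects a volume start while another cluster-wide transaction holds the lock and succeeds after some time, suggests a simple retry loop. This is a minimal sketch, not oVirt's actual handling; `start_volume` is a hypothetical callable standing in for whatever issues the gluster volume-start command:

```python
import time

TRANSIENT_ERROR = "Another transaction is in progress"

def start_with_retry(start_volume, retries=5, delay=2.0):
    """Retry a volume-start callable while gluster holds its cluster-wide
    transaction lock; re-raise any other error immediately."""
    for attempt in range(retries):
        try:
            return start_volume()
        except RuntimeError as err:
            if TRANSIENT_ERROR not in str(err) or attempt == retries - 1:
                raise
            time.sleep(delay)  # wait for the other transaction to finish

# Example: a fake start that fails twice with the transient error, then succeeds.
calls = {"n": 0}
def fake_start():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError(TRANSIENT_ERROR)
    return "volume started"

print(start_with_retry(fake_start, delay=0.01))  # -> volume started
```

Any non-transient error is re-raised immediately, so only the specific lock-contention message triggers a retry.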
Comment 8 Ramesh N 2013-12-03 03:19:59 EST
We are not able to reproduce this error with the current setup. We may try to reproduce it on a Corbett setup.
