Bug 1030008 - [Scale] Not Able To Start Volume (8-hosts, 8-bricks each host)
Summary: [Scale] Not Able To Start Volume (8-hosts, 8-bricks each host)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Ramesh N
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-13 16:55 UTC by Matt Mahoney
Modified: 2015-05-13 16:34 UTC (History)
6 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-03 12:07:34 UTC
Target Upstream Version:
mmahoney: needinfo+


Attachments (Terms of Use)
Logs (3.93 MB, application/x-gzip)
2013-11-13 16:58 UTC, Matt Mahoney

Description Matt Mahoney 2013-11-13 16:55:29 UTC
Description of problem:
A volume consisting of 8 hosts with 8 bricks per host fails to start.

Note: The configuration consists of a total of 8 volumes, each with 8 hosts and 8 bricks per host.

Ovirt log message:
2013-11-13 11:36:55,195 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (ajp-/127.0.0.1:8702-7) [1fe610de] Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 2806cc3f-25e6-4bc5-a413-cb32930e7ef3 value: GLUSTER


Browser message:
Error while executing action: A Request to the Server failed with the following Status Code: 500

Version-Release number of selected component (if applicable):
Big Bend

How reproducible:


Steps to Reproduce:
1. Create 8 volumes, each consisting of 8 hosts with 8 bricks per host
2. Start any volume
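The layout in step 1 can be scripted. Below is a minimal sketch of building the 64-brick argument list for one such volume; the hostnames, volume name, and brick paths are illustrative, not taken from the reported setup:

```shell
#!/bin/sh
# Build the 64-brick list for one volume: 8 hosts x 8 bricks per host.
# Names below (host1..host8, vol01, /bricks/...) are hypothetical examples.
vol=vol01
bricks=""
h=1
while [ "$h" -le 8 ]; do
    b=1
    while [ "$b" -le 8 ]; do
        bricks="$bricks host$h:/bricks/$vol/brick$b"
        b=$((b + 1))
    done
    h=$((h + 1))
done
echo "$bricks" | wc -w   # 64 bricks per volume
# On a real cluster this list would then be passed to:
#   gluster volume create "$vol" $bricks
```

Repeating the loop for vol01 through vol08 reproduces the full 8-volume configuration described in the report.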

Actual results:
Volume does not start

Expected results:
Volume should start

Additional info:

Comment 2 Matt Mahoney 2013-11-13 16:58:59 UTC
Created attachment 823547 [details]
Logs

Comment 3 Matt Mahoney 2013-11-15 14:33:44 UTC
Note: The 64-host, 8-volume configuration was an Import Cluster operation. Neither the hosts nor the volumes were added/created with the Console.

Comment 4 Ramesh N 2013-11-25 10:16:50 UTC
We are not able to reproduce this error with a similar setup (64 hosts, 8 bricks per host, and 8 volumes with 64 bricks each).

We were able to start and stop the volumes many times without any issue.

Comment 6 Ramesh N 2013-11-26 09:04:45 UTC
The issue is still not reproducible with the given steps; volume start/stop always works without any issue.

I can see from the log attached by Matt that most of the time gluster failed to start the volume with the error 'Another transaction is in progress'. I think this is normal, and volume start/stop should work after some time.

Please verify once again with the same setup.
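Comment 6 suggests the 'Another transaction is in progress' failure is transient and that the start should succeed if retried after some time. A minimal sketch of such a retry wrapper follows; the `retry` helper, the volume name, and the delay are illustrative, not part of RHSC or the gluster CLI:

```shell
#!/bin/sh
# Retry a command until it succeeds or the attempt limit is reached.
# Intended for transient gluster errors such as
# "Another transaction is in progress" (hypothetical usage sketch).
RETRY_DELAY=${RETRY_DELAY:-5}   # seconds to wait between attempts

retry() {
    attempts=$1
    shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then
            return 0            # command succeeded
        fi
        echo "attempt $i/$attempts failed" >&2
        if [ "$i" -lt "$attempts" ]; then
            sleep "$RETRY_DELAY"
        fi
        i=$((i + 1))
    done
    return 1                    # all attempts failed
}

# On a real cluster one would run, for example:
#   retry 5 gluster volume start vol01
```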

Comment 8 Ramesh N 2013-12-03 08:19:59 UTC
We are not able to reproduce this error with the current setup. We may try to
reproduce it on a Corbett setup.

