Bug 962266 - core: volume start fails on latest build
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64 Linux
Priority: high, Severity: urgent
Assigned To: Krutika Dhananjay
QA Contact: Saurabh
 
Reported: 2013-05-12 23:49 EDT by Saurabh
Modified: 2016-01-19 01:11 EST
CC: 5 users

Fixed In Version: glusterfs-3.4.0.8rhs-1
Doc Type: Bug Fix
Last Closed: 2013-09-23 18:35:31 EDT
Type: Bug


Attachments: None
Description Saurabh 2013-05-12 23:49:15 EDT
Description of problem:

gluster volume start fails on the latest build.
[root@bigbend1 ~]# gluster peer status
Number of Peers: 4

Hostname: 10.70.37.121
Uuid: cd19b8d0-bb97-42a8-9860-6cbc4bda1601
State: Peer in Cluster (Connected)

Hostname: 10.70.37.211
Uuid: baed4623-ca4c-4028-8674-f5e0c285e367
State: Peer in Cluster (Connected)

Hostname: 10.70.37.100
Uuid: ccdb3e04-5889-46dc-ab35-6e624d9ef85c
State: Peer in Cluster (Connected)

Hostname: 10.70.37.155
Uuid: 15316a44-a8df-4c4d-97a6-346a51d267d2
State: Peer in Cluster (Connected)
[root@bigbend1 ~]# gluster volume info
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 5912c35e-ebff-4d5d-9258-a5426b14a6e3
Status: Stopped
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.155:/rhs/brick1/d1r1
Brick2: 10.70.37.100:/rhs/brick1/d1r2
Brick3: 10.70.37.121:/rhs/brick1/d2r1
Brick4: 10.70.37.211:/rhs/brick1/d2r2
Brick5: 10.70.37.155:/rhs/brick1/d3r1
Brick6: 10.70.37.100:/rhs/brick1/d3r2
Brick7: 10.70.37.121:/rhs/brick1/d4r1
Brick8: 10.70.37.211:/rhs/brick1/d4r2
Brick9: 10.70.37.155:/rhs/brick1/d5r1
Brick10: 10.70.37.100:/rhs/brick1/d5r2
Brick11: 10.70.37.121:/rhs/brick1/d6r1
Brick12: 10.70.37.211:/rhs/brick1/d6r2
Options Reconfigured:
server.root-squash: enable



Version-Release number of selected component (if applicable):
glusterfs-3.4.0.6rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Stop the volume and stop glusterd on all nodes.
2. Remove the RPMs of the older build.
3. Install the latest RPMs and start glusterd on all nodes of the cluster.
4. gluster volume start <volume-name> (a consolidated sketch of this procedure follows)
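For reference, a consolidated sketch of the procedure above, assuming a RHEL 6 host (service rather than systemctl), the volume name from this report, and that the new packages are available from a configured yum repository:

# Step 1: stop the volume (once, from any node), then glusterd (on every node)
gluster volume stop dist-rep
service glusterd stop

# Step 2: remove the RPMs of the older build (on every node)
rpm -qa | grep glusterfs        # record the installed build first
yum -y remove "glusterfs*"

# Step 3: install the latest RPMs and start glusterd (on every node)
yum -y install glusterfs-server
service glusterd start

# Step 4: start the volume again (once, from any node)
gluster volume start dist-rep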
  
Actual results:
[root@bigbend1 ~]# gluster volume start dist-rep
volume start: dist-rep: failed: Another transaction could be in progress. Please try again after sometime.
[root@bigbend1 ~]# gluster volume info
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 5912c35e-ebff-4d5d-9258-a5426b14a6e3
Status: Stopped
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.155:/rhs/brick1/d1r1
Brick2: 10.70.37.100:/rhs/brick1/d1r2
Brick3: 10.70.37.121:/rhs/brick1/d2r1
Brick4: 10.70.37.211:/rhs/brick1/d2r2
Brick5: 10.70.37.155:/rhs/brick1/d3r1
Brick6: 10.70.37.100:/rhs/brick1/d3r2
Brick7: 10.70.37.121:/rhs/brick1/d4r1
Brick8: 10.70.37.211:/rhs/brick1/d4r2
Brick9: 10.70.37.155:/rhs/brick1/d5r1
Brick10: 10.70.37.100:/rhs/brick1/d5r2
Brick11: 10.70.37.121:/rhs/brick1/d6r1
Brick12: 10.70.37.211:/rhs/brick1/d6r2
Options Reconfigured:
server.root-squash: enable
[root@bigbend1 ~]# gluster volume status
Another transaction could be in progress. Please try again after sometime.
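This error typically means glusterd could not acquire its cluster-wide lock, i.e. another transaction appears to be holding it; the glusterd log usually records why. A quick check (log path assumed from the glusterfs 3.x default layout):

# Look for lock-related messages in the glusterd log on the affected node
grep -i lock /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -20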

Interestingly, this behaviour is not seen on all nodes; two of the four nodes are affected (a per-node check sketch follows):

10.70.37.155 --- affected
10.70.37.121 --- affected
10.70.37.100 --- unaffected
10.70.37.211 --- unaffected
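To map out which nodes are affected, the same query can be run against every peer and the output compared (a minimal sketch, assuming passwordless ssh as root to each node):

for h in 10.70.37.155 10.70.37.121 10.70.37.100 10.70.37.211; do
    echo "== $h =="
    ssh root@$h "gluster volume status dist-rep" 2>&1 | head -5
done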
 


Expected results:
The gluster commands should succeed and show the same result from all nodes.

Additional info:
Comment 1 Saurabh 2013-05-12 23:54:31 EDT
sosreports:-
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/962266/
Comment 3 Amar Tumballi 2013-05-17 02:30:13 EDT
This looks like the same issue described in the mail to storage-qa (about the RPM upgrade issue). Can you please verify with the latest ISO + build, with no upgrade in the picture?
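One way to rule an upgrade out is to confirm that every node reports only the freshly installed build (a minimal sketch; package names assumed from the standard glusterfs packaging):

# On each node, confirm only the new build is installed
rpm -q glusterfs glusterfs-server glusterfs-fuse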
Comment 5 Scott Haines 2013-09-23 18:35:31 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
