Bug 1101588 - BVT: Volume top command for a wrong brick is causing cli to hang
Summary: BVT: Volume top command for a wrong brick is causing cli to hang
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Avra Sengupta
QA Contact: Lalatendu Mohanty
URL:
Whiteboard:
Depends On:
Blocks: 1102656
 
Reported: 2014-05-27 14:59 UTC by Lalatendu Mohanty
Modified: 2016-09-17 14:40 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.6.0.12-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned As: 1102656
Environment:
Last Closed: 2014-09-22 19:39:28 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2014:1278 (SHIPPED_LIVE): Red Hat Storage Server 3.0 bug fix and enhancement update, last updated 2014-09-22 23:26:55 UTC

Description Lalatendu Mohanty 2014-05-27 14:59:37 UTC
Description of problem:

Volume top tests are failing in BVT: the CLI times out for the command below, and subsequent tests fail because the CLI returns the error "Another transaction is in progress for test-vol. Please try again after sometime."

The test is a negative test in which the following command is run for a wrong brick, i.e. a brick that is not part of the volume.

gluster volume top $volname open brick $IP:/tmp/brick
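
When reproducing this, the failing command can be wrapped in a timeout so a hung CLI does not block the test run. This is only a sketch, not part of the original report: the 120-second limit is an assumption, $volname and $IP must already be set, and /tmp/brick is deliberately not a brick of the volume.

# Sketch: run the negative-test command under a timeout (coreutils).
# Exit status 124 means the CLI hung and was killed by timeout.
timeout 120 gluster volume top "$volname" open brick "$IP":/tmp/brick
echo "volume top exit status: $?"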

Version-Release number of selected component (if applicable):

glusterfs-3.6.0.8-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and start it.
2. Mount the volume on a client and run I/O:
for i in `seq 1 100`; do dd if=/dev/zero of=/mnt/fuse/$i bs=128K count=100 2>&1 1>/dev/null; dd if=/mnt/fuse/$i of=/dev/zero bs=128K count=100 2>&1 1>/dev/null; done
3. gluster volume top $volname open
4. Run volume top for a brick that is not part of the volume:
gluster volume top $volname open brick $IP:/tmp/brick
5. gluster volume top $volname read
(A consolidated reproduction script is sketched after this list.)
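
For convenience, the steps above can be strung together into one script. This is only a sketch under assumptions that are not in the report: a single-server distribute volume named test-vol, bricks under /bricks, a FUSE mount at /mnt/fuse, and a placeholder server address.

#!/bin/bash
# Reproduction sketch; volume layout, brick paths, IP and mount point are assumptions.
volname=test-vol
IP=192.0.2.10                                    # placeholder server address
mkdir -p /bricks/b1 /bricks/b2 /mnt/fuse
gluster volume create $volname $IP:/bricks/b1 $IP:/bricks/b2 force
gluster volume start $volname
mount -t glusterfs $IP:/$volname /mnt/fuse
for i in `seq 1 100`; do                         # step 2: generate some I/O
    dd if=/dev/zero of=/mnt/fuse/$i bs=128K count=100 1>/dev/null 2>&1
    dd if=/mnt/fuse/$i of=/dev/zero bs=128K count=100 1>/dev/null 2>&1
done
gluster volume top $volname open                          # step 3: works
gluster volume top $volname open brick $IP:/tmp/brick     # step 4: wrong brick, CLI hangs
gluster volume top $volname read                          # step 5: "Another transaction is in progress"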

Actual results:

$gluster volume top $volname read
Another transaction is in progress for test-vol. Please try again after sometime.
volume top unsuccessful


Expected results:

The command "gluster volume top $volname read" should work fine.

Additional info:

From /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-05-27 14:18:59.938060] I [glusterd-handler.c:1367:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-05-27 14:18:59.940369] I [glusterd-handler.c:1367:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-05-27 14:19:36.337073] I [glusterd-handler.c:2709:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume test-vol
[2014-05-27 14:19:36.340450] E [glusterd-rpc-ops.c:1866:glusterd_brick_op] 0-management: Failed to select bricks while performing brick op during 'Volume Profile'
[2014-05-27 14:19:36.340507] E [glusterd-op-sm.c:6516:glusterd_op_sm] 0-management: handler returned: -1
[2014-05-27 14:19:36.340537] E [glusterd-op-sm.c:207:glusterd_get_txn_opinfo] 0-: Unable to get transaction opinfo for transaction ID : 00000000-0000-0000-0000-000000000000
[2014-05-27 14:19:36.340553] E [glusterd-op-sm.c:6498:glusterd_op_sm] 0-management: Unable to get transaction's opinfo
[2014-05-27 14:30:42.115923] I [glusterd-handler.c:2709:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume test-vol
[2014-05-27 14:30:42.116073] W [glusterd-locks.c:547:glusterd_mgmt_v3_lock] 0-management: Lock for test-vol held by 79f50508-9dc4-4f20-906b-2c8b5978ac31
[2014-05-27 14:30:42.116113] E [glusterd-handler.c:687:glusterd_op_txn_begin] 0-management: Unable to acquire lock for test-vol
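
The messages above suggest that once brick selection fails, glusterd cannot look up the transaction's opinfo (note the all-zero transaction ID) and the mgmt_v3 lock taken on test-vol for the top/profile operation is never released, which is why later commands report another transaction in progress. As a workaround sketch, and only an assumption not stated in the report, the stale in-memory lock can usually be cleared by restarting glusterd on the node whose UUID appears in the lock message; brick processes are not restarted by this.

# Workaround sketch (assumption): find the node holding the lock and restart
# its glusterd to drop the stale in-memory mgmt_v3 lock.
gluster system:: uuid get          # run on each node; match 79f50508-9dc4-4f20-906b-2c8b5978ac31
service glusterd restart           # on the matching node (el6 init script)
gluster volume top test-vol read   # should now proceed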

Comment 2 Avra Sengupta 2014-06-02 12:02:24 UTC
Fix at https://code.engineering.redhat.com/gerrit/26087

Comment 5 Lalatendu Mohanty 2014-06-05 14:25:16 UTC
Not seeing this issue on glusterfs-3.6.0.12-1.el6rhs. Hence marking it verified.

Comment 7 errata-xmlrpc 2014-09-22 19:39:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

