Bug 1102656 - BVT: Volume top command for a wrong brick is causing cli to hang
Summary: BVT: Volume top command for a wrong brick is causing cli to hang
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On: 1101588
Blocks:
 
Reported: 2014-05-29 12:00 UTC by Avra Sengupta
Modified: 2014-11-11 08:33 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1101588
Environment:
Last Closed: 2014-11-11 08:33:46 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Avra Sengupta 2014-05-29 12:00:59 UTC
+++ This bug was initially created as a clone of Bug #1101588 +++

Description of problem:

Volume top tests are failing in BVT because the CLI times out for the command below, and the subsequent tests fail with the CLI returning the error "Another transaction is in progress for test-vol. Please try again after sometime."

This is a negative test in which the following command is run for a wrong brick (a brick that is not part of the volume).

gluster volume top $volname open brick $IP:/tmp/brick

Version-Release number of selected component (if applicable):

glusterfs-3.6.0.8-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and start it.
2. Mount it on a client and run I/O:
for i in `seq 1 100`; do dd if=/dev/zero of=/mnt/fuse/$i bs=128K count=100 1>/dev/null 2>&1; dd if=/mnt/fuse/$i of=/dev/zero bs=128K count=100 1>/dev/null 2>&1; done
3.  gluster volume top $volname open
4. Run volume top for a brick which is not part of the volume.
gluster volume top $volname open brick $IP:/tmp/brick
5. gluster volume top $volname read

Actual results:

$gluster volume top $volname read
Another transaction is in progress for test-vol. Please try again after sometime.
volume top unsuccessful


Expected results:

The command "gluster volume top $volname read" should work fine.

Additional info:

From /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-05-27 14:18:59.938060] I [glusterd-handler.c:1367:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-05-27 14:18:59.940369] I [glusterd-handler.c:1367:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-05-27 14:19:36.337073] I [glusterd-handler.c:2709:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume test-vol
[2014-05-27 14:19:36.340450] E [glusterd-rpc-ops.c:1866:glusterd_brick_op] 0-management: Failed to select bricks while performing brick op during 'Volume Profile'
[2014-05-27 14:19:36.340507] E [glusterd-op-sm.c:6516:glusterd_op_sm] 0-management: handler returned: -1
[2014-05-27 14:19:36.340537] E [glusterd-op-sm.c:207:glusterd_get_txn_opinfo] 0-: Unable to get transaction opinfo for transaction ID : 00000000-0000-0000-0000-000000000000
[2014-05-27 14:19:36.340553] E [glusterd-op-sm.c:6498:glusterd_op_sm] 0-management: Unable to get transaction's opinfo
[2014-05-27 14:30:42.115923] I [glusterd-handler.c:2709:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume test-vol
[2014-05-27 14:30:42.116073] W [glusterd-locks.c:547:glusterd_mgmt_v3_lock] 0-management: Lock for test-vol held by 79f50508-9dc4-4f20-906b-2c8b5978ac31
[2014-05-27 14:30:42.116113] E [glusterd-handler.c:687:glusterd_op_txn_begin] 0-management: Unable to acquire lock for test-vol
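
Reading the log in order: the brick op for the non-existent brick fails before any transaction ID has been recorded, so the op state machine is handed the all-zero ID, cannot look up the transaction's opinfo, and never reaches the unlock step; the mgmt_v3 lock taken for test-vol therefore stays held, and every later CLI operation on the volume fails with "Another transaction is in progress". Below is a minimal, stand-alone C sketch of that lock-leak mechanism; the names and data structures are hypothetical stand-ins, not the actual glusterd code.

#include <stdio.h>
#include <string.h>

/* Hypothetical model of glusterd's per-transaction bookkeeping: the
 * unlock step can only run if the transaction can be found by its ID. */
typedef unsigned char txn_id_t[16];

static txn_id_t active_txn = { 0xab, 0xcd };  /* ID of the running transaction */
static int volume_locked = 1;                 /* lock taken when the txn began */

/* Stands in for glusterd_get_txn_opinfo(): succeeds only for a known ID. */
static int get_txn_opinfo(const txn_id_t id)
{
    return memcmp(id, active_txn, sizeof(txn_id_t)) == 0 ? 0 : -1;
}

/* Stands in for the op-sm completion path that releases the volume lock. */
static void finish_txn(const txn_id_t id)
{
    if (get_txn_opinfo(id) != 0) {
        puts("Unable to get transaction opinfo -> lock NOT released");
        return;
    }
    volume_locked = 0;
    puts("transaction found -> lock released");
}

int main(void)
{
    txn_id_t zero_id = { 0 };   /* what the failed brick op hands back */

    finish_txn(zero_id);        /* lock stays held */
    printf("next CLI request: %s\n",
           volume_locked ? "Another transaction is in progress" : "ok");

    finish_txn(active_txn);     /* with the real ID the lock is freed */
    return 0;
}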

Comment 1 Anand Avati 2014-05-29 12:16:25 UTC
REVIEW: http://review.gluster.org/7926 (glusterd: Fetching the txn_id before performing glusterd_op_bricks_select in glusterd_brick_op()) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2014-06-02 11:42:47 UTC
COMMIT: http://review.gluster.org/7926 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit e8c13fa9bd2a838335e923ec48bcb66e2cb5861d
Author: Avra Sengupta <asengupt>
Date:   Thu May 29 11:59:30 2014 +0000

    glusterd: Fetching the txn_id before performing glusterd_op_bricks_select in glusterd_brick_op()
    
    In glusterd_brick_op(), the txn_id must be fetched before
    failing the transaction for any other reason. Moving
    the fetching of txn_id to the beginning of the function.
    
    Also initializing txn_id to priv->global_txn_id where it
    wasn't initialized.
    
    Change-Id: I44d7daa444f00a626f24670c92324725f6c5fb35
    BUG: 1102656
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/7926
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
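
The patch is purely an ordering change inside glusterd_brick_op(): the transaction ID is fetched (or defaulted to priv->global_txn_id) before any early return, so even a failed brick selection leaves the state machine with an ID it can use to clean up and release the volume lock. The following is a simplified, self-contained C sketch of the old versus new ordering; the names are hypothetical and this is not the actual glusterd code.

#include <stdio.h>
#include <string.h>

typedef unsigned char txn_id_t[16];

static const txn_id_t global_txn_id = { 1 };  /* models priv->global_txn_id */

/* Models glusterd_op_bricks_select(): fails for a brick that is not
 * part of the volume. */
static int select_bricks(const char *brick)
{
    return strcmp(brick, "host:/bricks/b1") ? -1 : 0;
}

/* Old ordering: an early return on selection failure leaves txn_id
 * all-zero, so the transaction cannot be found afterwards. */
static int brick_op_old(const char *brick, txn_id_t txn_id)
{
    memset(txn_id, 0, sizeof(txn_id_t));
    if (select_bricks(brick) != 0)
        return -1;                             /* txn_id still all zero */
    memcpy(txn_id, global_txn_id, sizeof(txn_id_t));
    return 0;
}

/* Patched ordering: initialize txn_id first, then fail if needed; the
 * failure can now be attributed to a known transaction and cleaned up. */
static int brick_op_new(const char *brick, txn_id_t txn_id)
{
    memcpy(txn_id, global_txn_id, sizeof(txn_id_t));
    if (select_bricks(brick) != 0)
        return -1;                             /* txn_id is usable */
    return 0;
}

int main(void)
{
    static const txn_id_t zero;
    txn_id_t id;

    brick_op_old("host:/tmp/brick", id);       /* wrong brick */
    printf("old ordering: txn_id is %s\n",
           memcmp(id, zero, sizeof(txn_id_t)) == 0 ? "all zero" : "valid");

    brick_op_new("host:/tmp/brick", id);       /* wrong brick */
    printf("new ordering: txn_id is %s\n",
           memcmp(id, zero, sizeof(txn_id_t)) == 0 ? "all zero" : "valid");
    return 0;
}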

Comment 3 Niels de Vos 2014-09-22 12:41:31 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 4 Niels de Vos 2014-11-11 08:33:46 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

