Bug 1707246 - [glusterd]: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded
Summary: [glusterd]: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Sanju
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On: 1710159 1722131
Blocks: 1696807
 
Reported: 2019-05-07 07:02 UTC by Rochelle
Modified: 2019-11-11 11:08 UTC
CC List: 9 users

Fixed In Version: glusterfs-6.0-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1710159
Environment:
Last Closed: 2019-10-30 12:21:23 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:21:41 UTC)

Description Rochelle 2019-05-07 07:02:22 UTC
Description of problem:
=======================
While performing an in-service upgrade (for geo-rep) of a 3-node cluster,
2 nodes (belonging to the master cluster) were upgraded successfully.
On the third node, which was still on the previous build (3.4.4), running 'gluster v status' times out.

[root@dhcp41-155]# gluster v status
Error : Request timed out

Other gluster commands seem to be working on this node.

The same command works as expected on the upgraded nodes:
[root@dhcp42-173 glusterfs]# gluster v status
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.211:/var/lib/glusterd/ss_bri
ck                                          49152     0          Y       9661 
Brick 10.70.41.155:/var/lib/glusterd/ss_bri
ck                                          49155     0          Y       16101
Brick dhcp42-173.lab.eng.blr.redhat.com:/va
r/lib/glusterd/ss_brick                     49152     0          Y       4718 
Self-heal Daemon on localhost               N/A       N/A        Y       4809 
Self-heal Daemon on 10.70.41.155            N/A       N/A        Y       16524
Self-heal Daemon on 10.70.42.211            N/A       N/A        Y       9964 
 
Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: master
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.173:/rhs/brick1/b1           49153     0          Y       4748 
Brick 10.70.42.211:/rhs/brick1/b2           49153     0          Y       9698 
Brick 10.70.41.155:/rhs/brick1/b3           49152     0          Y       15867
Brick 10.70.42.173:/rhs/brick2/b4           49154     0          Y       4757 
Brick 10.70.42.211:/rhs/brick2/b5           49154     0          Y       9707 
Brick 10.70.41.155:/rhs/brick2/b6           49153     0          Y       15888
Brick 10.70.42.173:/rhs/brick3/b7           49155     0          Y       4764 
Brick 10.70.42.211:/rhs/brick3/b8           49155     0          Y       9722 
Brick 10.70.41.155:/rhs/brick3/b9           49154     0          Y       15909
Self-heal Daemon on localhost               N/A       N/A        Y       4809 
Self-heal Daemon on 10.70.42.211            N/A       N/A        Y       9964 
Self-heal Daemon on 10.70.41.155            N/A       N/A        Y       16524
 
Task Status of Volume master




Error messages in glusterd log (on the 3.4.4 node) are as follows:
------------------------------------------------------------------
[2019-05-07 06:29:06.143535] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x19 sent = 2019-05-07 06:18:58.538764. timeout = 600 for 10.70.42.173:24007
[2019-05-07 06:29:06.143630] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on dhcp42-173.lab.eng.blr.redhat.com. Please check log file for details.
[2019-05-07 06:29:06.144183] I [socket.c:3699:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1)
[2019-05-07 06:29:06.144234] E [rpcsvc.c:1573:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x2, Program: GlusterD svc cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management)
[2019-05-07 06:29:06.144279] E [MSGID: 106430] [glusterd-utils.c:560:glusterd_submit_reply] 0-glusterd: Reply submission failed
[2019-05-07 06:42:46.327616] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x1b sent = 2019-05-07 06:32:44.342818. timeout = 600 for 10.70.42.173:24007
[2019-05-07 06:42:46.327901] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on dhcp42-173.lab.eng.blr.redhat.com. Please check log file for details.
[2019-05-07 06:42:50.328686] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x1a sent = 2019-05-07 06:32:44.342952. timeout = 600 for 10.70.42.211:24007
[2019-05-07 06:42:50.328839] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on 10.70.42.211. Please check log file for details.
[2019-05-07 06:42:50.329321] I [socket.c:3699:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1)
[2019-05-07 06:42:50.329356] E [rpcsvc.c:1573:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x2, Program: GlusterD svc cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management)
[2019-05-07 06:42:50.329402] E [MSGID: 106430] [glusterd-utils.c:560:glusterd_submit_reply] 0-glusterd: Reply submission failed


Error messages in glusterd log (on the upgraded node) are as follows:
------------------------------------------------------------------
[2019-05-07 06:18:58.535711] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :bc0a5ca5-f3a1-4c27-b263-d2d34289cbe3
[2019-05-07 06:32:44.339866] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :949c5d20-fb42-4d4d-8175-3e3dc158d310
[2019-05-07 06:43:02.694136] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :d150e923-3749-439a-ae3a-82ce997c8608




Version-Release number of selected component (if applicable):
==============================================================

UPGRADED NODE ==> [root@dhcp42-173]# rpm -qa | grep gluster
python2-gluster-6.0-2.el7rhgs.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.7.x86_64
glusterfs-cli-6.0-2.el7rhgs.x86_64
glusterfs-6.0-2.el7rhgs.x86_64
glusterfs-fuse-6.0-2.el7rhgs.x86_64
glusterfs-geo-replication-6.0-2.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-libs-6.0-2.el7rhgs.x86_64
glusterfs-api-6.0-2.el7rhgs.x86_64
glusterfs-events-6.0-2.el7rhgs.x86_64
tendrl-gluster-integration-1.6.3-12.el7rhgs.noarch
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-server-6.0-2.el7rhgs.x86_64
glusterfs-client-xlators-6.0-2.el7rhgs.x86_64
glusterfs-rdma-6.0-2.el7rhgs.x86_64


NODE TO BE UPGRADED ==> [root@dhcp41-155]# rpm -qa | grep gluster
glusterfs-events-3.12.2-47.el7rhgs.x86_64
glusterfs-rdma-3.12.2-47.el7rhgs.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.7.x86_64
glusterfs-cli-3.12.2-47.el7rhgs.x86_64
glusterfs-fuse-3.12.2-47.el7rhgs.x86_64
glusterfs-server-3.12.2-47.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-libs-3.12.2-47.el7rhgs.x86_64
glusterfs-3.12.2-47.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-47.el7rhgs.x86_64
tendrl-gluster-integration-1.6.3-12.el7rhgs.noarch
glusterfs-client-xlators-3.12.2-47.el7rhgs.x86_64
glusterfs-api-3.12.2-47.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
python2-gluster-3.12.2-47.el7rhgs.x86_64


How reproducible:
=================
1/1


Steps to Reproduce:
====================
In-service upgrade (geo-rep)

Actual results:
===============
While performing an in-service upgrade on a 3-node cluster, 'gluster v status' times out on the third node, which was still on the previous build (3.4.4).

Expected results:
=================
'gluster v status' should not time out

Additional info:
===============
The in-service upgrade was still in progress, so cluster.op-version was still set to the previous op-version; the cluster was effectively in the middle of an in-service upgrade scenario.

Comment 11 Sanju 2019-05-15 03:44:19 UTC
Root cause:
Commit 34e010d64 added conditions around setting txn-opinfo, to avoid a memory leak of the txn-opinfo object. In a heterogeneous cluster, however, the upgraded and non-upgraded nodes follow different conditions for setting txn-opinfo. This causes the get-txn-opinfo operation to fail, and eventually the command hangs.
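
To make the mismatch concrete, here is a toy model (plain C, not glusterd code; all names are illustrative) of what happens between the two op-sm implementations: the new code skips storing the txn-opinfo for a staged, skip-locking transaction, so a later lookup by transaction ID fails, matching the "Unable to get transaction opinfo" errors in the upgraded node's log.

/* Toy model of the txn-opinfo mismatch -- NOT glusterd code. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_TXNS 8

struct txn_entry {
    char txn_id[64];   /* transaction UUID, as a string */
    bool in_use;
};

static struct txn_entry table[MAX_TXNS];

/* Stand-in for glusterd_set_txn_opinfo(): remember a transaction. */
static int set_txn_opinfo(const char *txn_id)
{
    for (int i = 0; i < MAX_TXNS; i++) {
        if (!table[i].in_use) {
            snprintf(table[i].txn_id, sizeof(table[i].txn_id), "%s", txn_id);
            table[i].in_use = true;
            return 0;
        }
    }
    return -1;
}

/* Stand-in for glusterd_get_txn_opinfo(): look a transaction up. */
static int get_txn_opinfo(const char *txn_id)
{
    for (int i = 0; i < MAX_TXNS; i++) {
        if (table[i].in_use && !strcmp(table[i].txn_id, txn_id))
            return 0;
    }
    return -1;
}

int main(void)
{
    const char *txn = "bc0a5ca5-f3a1-4c27-b263-d2d34289cbe3";
    bool staged = true;        /* transaction is in the staged state */
    bool skip_locking = true;  /* e.g. 'gluster v status' */

    /* Upgraded (6.0) node after commit 34e010d64: skip the set for
     * staged, skip-locking transactions to avoid the opinfo leak. */
    if (!(staged && skip_locking))
        set_txn_opinfo(txn);

    /* A later event in the same transaction still performs the lookup,
     * which now fails -- the error seen on the upgraded node. */
    if (get_txn_opinfo(txn) != 0)
        printf("Unable to get transaction opinfo for transaction ID :%s\n",
               txn);
    return 0;
}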

[root@server2 glusterfs]# git show 34e010d64
commit 34e010d64905b7387de57840d3fb16a326853c9b
Author: Atin Mukherjee <amukherj>
Date:   Mon Mar 18 16:08:04 2019 +0530

    glusterd: fix txn-id mem leak
    
    This commit ensures the following:
    1. Don't send commit op request to the remote nodes when gluster v
    status all is executed as for the status all transaction the local
    commit gets the name of the volumes and remote commit ops are
    technically a no-op. So no need for additional rpc requests.
    2. In op state machine flow, if the transaction is in staged state and
    op_info.skip_locking is true, then no need to set the txn id in the
    priv->glusterd_txn_opinfo dictionary which never gets freed.
    
    Fixes: bz#1691164
    Change-Id: Ib6a9300ea29633f501abac2ba53fb72ff648c822
    Signed-off-by: Atin Mukherjee <amukherj>

diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 6495a9d88..84c34f1fe 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -5652,6 +5652,9 @@ glusterd_op_ac_stage_op(glusterd_op_sm_event_t *event, void *ctx)
     dict_t *dict = NULL;
     xlator_t *this = NULL;
     uuid_t *txn_id = NULL;
+    glusterd_op_info_t txn_op_info = {
+        {0},
+    };
 
     this = THIS;
     GF_ASSERT(this);
@@ -5686,6 +5689,7 @@ glusterd_op_ac_stage_op(glusterd_op_sm_event_t *event, void *ctx)
         ret = -1;
         goto out;
     }
+    ret = glusterd_get_txn_opinfo(&event->txn_id, &txn_op_info);
 
     ret = dict_set_bin(rsp_dict, "transaction_id", txn_id, sizeof(*txn_id));
     if (ret) {
@@ -5704,6 +5708,12 @@ out:
 
     gf_msg_debug(this->name, 0, "Returning with %d", ret);
 
+    /* for no volname transactions, the txn_opinfo needs to be cleaned up
+     * as there's no unlock event triggered
+     */
+    if (txn_op_info.skip_locking)
+        ret = glusterd_clear_txn_opinfo(txn_id);
+
     if (rsp_dict)
         dict_unref(rsp_dict);
 
@@ -8159,12 +8169,16 @@ glusterd_op_sm()
                            "Unable to clear "
                            "transaction's opinfo");
             } else {
-                ret = glusterd_set_txn_opinfo(&event->txn_id, &opinfo);
-                if (ret)
-                    gf_msg(this->name, GF_LOG_ERROR, 0,
-                           GD_MSG_TRANS_OPINFO_SET_FAIL,
-                           "Unable to set "
-                           "transaction's opinfo");
+                if (!(event_type == GD_OP_EVENT_STAGE_OP &&
+                      opinfo.state.state == GD_OP_STATE_STAGED &&
+                      opinfo.skip_locking)) {     <---- upgraded nodes no longer set txn-opinfo when this condition is false, so the subsequent glusterd_get_txn_opinfo() fails. Previously, txn-opinfo was set in every state of the op-sm, and glusterd_get_txn_opinfo() is called in every phase. We need to add an op-version check for this change.
+                    ret = glusterd_set_txn_opinfo(&event->txn_id, &opinfo);
+                    if (ret)
+                        gf_msg(this->name, GF_LOG_ERROR, 0,
+                               GD_MSG_TRANS_OPINFO_SET_FAIL,
+                               "Unable to set "
+                               "transaction's opinfo");
+                }
             }
 
             glusterd_destroy_op_event_ctx(event);
diff --git a/xlators/mgmt/glusterd/src/glusterd-syncop.c b/xlators/mgmt/glusterd/src/glusterd-syncop.c
index 45b221c2e..9bab2cfd5 100644
--- a/xlators/mgmt/glusterd/src/glusterd-syncop.c
+++ b/xlators/mgmt/glusterd/src/glusterd-syncop.c
@@ -1392,6 +1392,8 @@ gd_commit_op_phase(glusterd_op_t op, dict_t *op_ctx, dict_t *req_dict,
     char *errstr = NULL;
     struct syncargs args = {0};
     int type = GF_QUOTA_OPTION_TYPE_NONE;
+    uint32_t cmd = 0;
+    gf_boolean_t origin_glusterd = _gf_false;
 
     this = THIS;
     GF_ASSERT(this);
@@ -1449,6 +1451,20 @@ commit_done:
     gd_syncargs_init(&args, op_ctx);
     synctask_barrier_init((&args));
     peer_cnt = 0;
+    origin_glusterd = is_origin_glusterd(req_dict);
+
+    if (op == GD_OP_STATUS_VOLUME) {
+        ret = dict_get_uint32(req_dict, "cmd", &cmd);
+        if (ret)
+            goto out;
+
+        if (origin_glusterd) {
+            if ((cmd & GF_CLI_STATUS_ALL)) {
+                ret = 0;
+                goto out;
+            }
+        }
+    }
 
     RCU_READ_LOCK;
     cds_list_for_each_entry_rcu(peerinfo, &conf->peers, uuid_list)


Patch https://review.gluster.org/#/c/glusterfs/+/22730 has been posted upstream for review.
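
The direction of the fix, per the annotation above, is to gate the new skip-set condition on the cluster op-version, so that a mixed cluster keeps the old set-in-every-state behaviour until all nodes are upgraded. A minimal sketch of that gate, assuming a GD_OP_VERSION_6_0 constant and a conf pointer to glusterd's private configuration (see the linked patch for the actual change):

/* Sketch only; exact constant/variable names are assumptions. */
if ((conf->op_version < GD_OP_VERSION_6_0) ||
    !(event_type == GD_OP_EVENT_STAGE_OP &&
      opinfo.state.state == GD_OP_STATE_STAGED &&
      opinfo.skip_locking)) {
    /* Old-style behaviour: set txn-opinfo in every op-sm state so
     * that peers still running 3.4.x can look it up. */
    ret = glusterd_set_txn_opinfo(&event->txn_id, &opinfo);
    if (ret)
        gf_msg(this->name, GF_LOG_ERROR, 0,
               GD_MSG_TRANS_OPINFO_SET_FAIL,
               "Unable to set transaction's opinfo");
}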

Thanks,
Sanju

Comment 17 errata-xmlrpc 2019-10-30 12:21:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

