Bug 874498

Summary: execstack shows that the stack is executable for some of the libraries
Product: [Community] GlusterFS
Reporter: Pranith Kumar K <pkarampu>
Component: unclassified
Assignee: Vijay Bellur <vbellur>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: gluster-bugs, vsomyaju
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-04-17 11:39:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Pranith Kumar K 2012-11-08 10:26:28 UTC
Description of problem:
[root@pranithk-laptop 3git]# for i in `find /usr/local/lib/glusterfs/3git -iname "*so" `; do execstack $i; done | grep "^X"
X /usr/local/lib/glusterfs/3git/xlator/mount/fuse.so
X /usr/local/lib/glusterfs/3git/xlator/storage/posix.so
X /usr/local/lib/glusterfs/3git/xlator/protocol/server.so
X /usr/local/lib/glusterfs/3git/xlator/features/index.so

This is happening because of the use of nested (inner) functions in C. Please remove that usage to fix the issue.
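
For background, nested (inner) functions are a GCC extension; when the address of a nested function is taken, for example to pass it as a callback, GCC builds a trampoline on the stack at run time, the object file loses its non-executable-stack marking, and the linker then applies the executable-stack flag to the whole shared library. The following is a minimal illustration of the problematic pattern; the names and the walk() iterator are invented for the example and are not the actual GlusterFS code:

/* nested_stack.c -- illustrative only, not GlusterFS code.
 * Build (roughly): gcc -shared -fPIC nested_stack.c -o nested_stack.so
 * Check:           execstack nested_stack.so   -> prints "X nested_stack.so"
 */
#include <string.h>

/* Stand-in for a callback-taking iterator. */
static int
walk (const char **keys, int nkeys, int (*fn) (const char *key))
{
        int i, hits = 0;

        for (i = 0; i < nkeys; i++)
                hits += fn (keys[i]);
        return hits;
}

int
count_matching_keys (const char **keys, int nkeys, const char *wanted)
{
        /* Nested function: a GCC extension.  Because it captures `wanted'
         * from the enclosing frame and its address is passed to walk(),
         * GCC emits a stack trampoline and marks the object as requiring
         * an executable stack (RWE on the PT_GNU_STACK program header,
         * shown as "X" by execstack). */
        int matches (const char *key)
        {
                return strcmp (key, wanted) == 0;
        }

        return walk (keys, nkeys, matches);
}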


Comment 1 vsomyaju 2012-11-12 08:33:44 UTC
I have moved the _check_key_is_zero_filled function outside the xattrop_index_action function, so the script above will no longer list "X /usr/local/lib/glusterfs/3git/xlator/features/index.so" in its output.
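
For context, this is the standard way to get rid of a GCC nested function: hoist the helper to file scope as a static function and pass whatever state it needs through the callback's user-data argument instead of capturing it from the enclosing frame. A simplified sketch of the shape of such a change, continuing the hypothetical example from the description (not the actual index.c code):

/* hoisted_stack.c -- illustrative only, not GlusterFS code. */
#include <string.h>

/* The iterator now passes an explicit user-data pointer to the callback,
 * so no state has to be captured lexically. */
static int
walk (const char **keys, int nkeys,
      int (*fn) (const char *key, void *data), void *data)
{
        int i, hits = 0;

        for (i = 0; i < nkeys; i++)
                hits += fn (keys[i], data);
        return hits;
}

/* Helper hoisted to file scope: an ordinary static function, so GCC
 * emits no trampoline and the object keeps its non-executable-stack
 * marking (execstack prints "-" instead of "X"). */
static int
key_matches (const char *key, void *data)
{
        const char *wanted = data;

        return strcmp (key, wanted) == 0;
}

int
count_matching_keys (const char **keys, int nkeys, const char *wanted)
{
        return walk (keys, nkeys, key_matches, (void *) wanted);
}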

Comment 2 Vijay Bellur 2012-11-21 06:36:02 UTC
CHANGE: http://review.gluster.org/4182 (Put _check_key_is_zero_filled outside _xattrop_index_action) merged in master by Vijay Bellur (vbellur)

Comment 3 Anand Avati 2013-04-09 18:58:41 UTC
REVIEW: http://review.gluster.org/4796 (tests: fix dependency on sleep in bug-874498.t) posted (#1) for review on master by Anand Avati (avati)

Comment 4 Anand Avati 2013-04-09 19:55:51 UTC
COMMIT: http://review.gluster.org/4796 committed in master by Anand Avati (avati) 
------
commit f364d542aaf272c14b1d6ef7c9ac805db0fdb45c
Author: Anand Avati <avati>
Date:   Fri Apr 5 16:26:53 2013 -0700

    tests: fix dependency on sleep in bug-874498.t
    
    With the introduction of http://review.gluster.org/4784, there are
    delays which break bug-874498.t, which wrongly depends on healing
    finishing within 2 seconds.
    
    Fix this by using 'EXPECT_WITHIN 60' instead of sleep 2.
    
    Change-Id: I2716d156c977614c719665a5e1f159dabf2878b5
    BUG: 874498
    Signed-off-by: Anand Avati <avati>
    Reviewed-on: http://review.gluster.org/4796
    Reviewed-by: Jeff Darcy <jdarcy>
    Tested-by: Gluster Build System <jenkins.com>

Comment 5 Anand Avati 2013-04-10 00:36:19 UTC
REVIEW: http://review.gluster.org/4798 (tests: fix further issues with bug-874498.t) posted (#1) for review on master by Anand Avati (avati)

Comment 6 Anand Avati 2013-04-10 04:44:09 UTC
COMMIT: http://review.gluster.org/4798 committed in master by Anand Avati (avati) 
------
commit a216f5f44675bfe189c318171dbc50e1c19bfc26
Author: Anand Avati <avati>
Date:   Tue Apr 9 17:22:01 2013 -0700

    tests: fix further issues with bug-874498.t
    
    The failure of bug-874498.t seems to be a "bug" in glustershd.
    The situation seems to arise when both subvolumes of a replica are
    "local" to glustershd; in such cases glustershd is sensitive to
    the order in which the subvols come up.

    The core of the issue is that, without the patch (#4784), the
    self-heal daemon completes processing the index and no entries
    are left inside the xattrop index within a few seconds of "volume
    start force". However, with the patch, a stale "backing file"
    (against which index performs link()) is left behind. The likely
    reason is that an "INDEX" based crawl is not happening against
    the subvol when this patch is applied.
    
    Before the #4784 patch, the order in which the subvols came up was:
    
      [2013-04-09 22:55:35.117679] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49156, attached to remote volume '/d/backends/brick1'.
      ...
      [2013-04-09 22:55:35.118399] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49157, attached to remote volume '/d/backends/brick2'.
    
    However, with the patch, the order is reversed:
    
      [2013-04-09 22:53:34.945370] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49153, attached to remote volume '/d/backends/brick2'.
      ...
      [2013-04-09 22:53:34.950966] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49152, attached to remote volume '/d/backends/brick1'.
    
    The index in brick2 has the list of files/gfid to heal. It appears
    to be the case that when brick1 is the first subvol to be detected
    as coming up, somehow an INDEX based crawl is clearing all the
    index entries in brick2, but if brick2 comes up as the first subvol,
    then the backing file is left stale.
    
    Also, doing a "gluster volume heal full" seems to leave behind
    stale backing files, as the crawl is performed on the namespace
    and the backing file is never encountered there to get cleared out.
    
    So the interim (possibly permanent) fix is to have the script issue
    a regular self-heal command (and not a "full" one).
    
    The failure of the script itself is non-critical. The data files are
    all healed, and it is just the backing file which is left behind. The
    stale backing file also gets cleared in the next index-based heal,
    either triggered manually or after 10 minutes.
    
    Change-Id: I5deb79652ef449b7e88684311e804a8a2aa4725d
    BUG: 874498
    Signed-off-by: Anand Avati <avati>
    Reviewed-on: http://review.gluster.org/4798
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>

Comment 7 Anand Avati 2013-04-15 10:34:50 UTC
REVIEW: http://review.gluster.org/4831 (tests: fix further issues with bug-874498.t) posted (#1) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 8 Anand Avati 2013-04-15 10:35:09 UTC
REVIEW: http://review.gluster.org/4832 (tests: fix dependency on sleep in bug-874498.t) posted (#1) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 9 Anand Avati 2013-04-16 16:35:41 UTC
REVIEW: http://review.gluster.org/4831 (tests: fix further issues with bug-874498.t) posted (#2) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 10 Anand Avati 2013-04-16 16:36:56 UTC
REVIEW: http://review.gluster.org/4832 (tests: fix dependency on sleep in bug-874498.t) posted (#2) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 11 Anand Avati 2013-04-17 06:08:36 UTC
REVIEW: http://review.gluster.org/4831 (tests: fix further issues with bug-874498.t) posted (#3) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 12 Anand Avati 2013-04-17 06:09:58 UTC
REVIEW: http://review.gluster.org/4832 (tests: fix dependency on sleep in bug-874498.t) posted (#3) for review on release-3.4 by Krishnan Parthasarathi (kparthas)

Comment 13 Anand Avati 2013-04-17 08:54:15 UTC
COMMIT: http://review.gluster.org/4831 committed in release-3.4 by Vijay Bellur (vbellur) 
------
commit 28da431e5bedba64380cc3886cbab03c0d7a3cfd
Author: Krishnan Parthasarathi <kparthas>
Date:   Mon Apr 15 15:51:14 2013 +0530

    tests: fix further issues with bug-874498.t
    
    The failure of bug-874498.t seems to be a "bug" in glustershd.
    The situation seems to arise when both subvolumes of a replica are
    "local" to glustershd; in such cases glustershd is sensitive to
    the order in which the subvols come up.

    The core of the issue is that, without the patch (#4784), the
    self-heal daemon completes processing the index and no entries
    are left inside the xattrop index within a few seconds of "volume
    start force". However, with the patch, a stale "backing file"
    (against which index performs link()) is left behind. The likely
    reason is that an "INDEX" based crawl is not happening against
    the subvol when this patch is applied.
    
    Before the #4784 patch, the order in which the subvols came up was:
    
      [2013-04-09 22:55:35.117679] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49156, attached to remote volume '/d/backends/brick1'.
      ...
      [2013-04-09 22:55:35.118399] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49157, attached to remote volume '/d/backends/brick2'.
    
    However, with the patch, the order is reversed:
    
      [2013-04-09 22:53:34.945370] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-1: Connected to 10.3.129.13:49153, attached to remote volume '/d/backends/brick2'.
      ...
      [2013-04-09 22:53:34.950966] I [client-handshake.c:1456:client_setvolume_cbk] 0-patchy-client-0: Connected to 10.3.129.13:49152, attached to remote volume '/d/backends/brick1'.
    
    The index in brick2 has the list of files/gfid to heal. It appears
    to be the case that when brick1 is the first subvol to be detected
    as coming up, somehow an INDEX based crawl is clearing all the
    index entries in brick2, but if brick2 comes up as the first subvol,
    then the backing file is left stale.
    
    Also, doing a "gluster volume heal full" seems to leave behind
    stale backing files, as the crawl is performed on the namespace
    and the backing file is never encountered there to get cleared out.
    
    So the interim (possibly permanent) fix is to have the script issue
    a regular self-heal command (and not a "full" one).
    
    The failure of the script itself is non-critical. The data files are
    all healed, and it is just the backing file which is left behind. The
    stale backing file also gets cleared in the next index-based heal,
    either triggered manually or after 10 minutes.
    
    BUG: 874498
    Change-Id: I601e9adec46bb7f8ba0b1ba09d53b83bf317ab6a
    Original-author: Anand Avati <avati>
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/4831
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 14 Anand Avati 2013-04-17 11:36:54 UTC
COMMIT: http://review.gluster.org/4832 committed in release-3.4 by Vijay Bellur (vbellur) 
------
commit 63098d9ff8dcfc08fd2ed83c62c4ffb63fc2126f
Author: Krishnan Parthasarathi <kparthas>
Date:   Mon Apr 15 15:52:53 2013 +0530

    tests: fix dependency on sleep in bug-874498.t
    
    With the introduction of http://review.gluster.org/4784, there are
    delays which break bug-874498.t, which wrongly depends on healing
    finishing within 2 seconds.
    
    Fix this by using 'EXPECT_WITHIN 60' instead of sleep 2.
    
    BUG: 874498
    Change-Id: I7131699908e63b024d2dd71395b3e94c15fe925c
    Original-author: Anand Avati <avati>
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/4832
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Gluster Build System <jenkins.com>

Comment 15 Pranith Kumar K 2014-01-03 12:46:31 UTC

Don't see this on latest master:
pk@pranithk-laptop - ~/workspace/gerrit-repo (master)
18:14:27 :) ⚡ for i in `find /usr/local/lib/glusterfs/3git -iname "*so" `; do execstack $i; done | grep "^X"

pk@pranithk-laptop - ~/workspace/gerrit-repo (master)
18:14:40 :( ⚡

Comment 16 Niels de Vos 2014-04-17 11:39:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user