Bug 999795 - RHS-C: List gluster hook fails when some of the hooks directories (post/pre) are missing in the RHS nodes [NEEDINFO]
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Timothy Asir
QA Contact: Prasanth
Keywords: ZStream
Depends On: 998514 1018076
Blocks: 1028982
 
Reported: 2013-08-22 03:12 EDT by Prasanth
Modified: 2015-05-13 12:32 EDT (History)
CC: 10 users

See Also:
Fixed In Version: CB5
Doc Type: Bug Fix
Doc Text:
Previously, the Gluster Hooks list was not displayed if some of the hook directories (post/pre) were missing on the Red Hat Storage nodes. With this update, the Gluster Hooks list is displayed even when some of these directories are missing.
Story Points: ---
Clone Of: 998514
Clones: 1028982
Environment:
Last Closed: 2014-02-25 02:35:23 EST
Type: Bug
sharne: needinfo? (tjeyasin)


Attachments: None
Description Prasanth 2013-08-22 03:12:23 EDT
+++ This bug was initially created as a clone of Bug #998514 +++

Description of problem:

Syncing gluster hooks fails with the following exception in the vdsm log:

--------------------------
Thread-331::DEBUG::2013-08-19 18:31:49,451::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call hooksList with () {} flowID [2f5ba54e]
Thread-331::ERROR::2013-08-19 18:31:49,453::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 221, in hooksList
    status = self.svdsmProxy.glusterHooksList()
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 67, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterHooksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterHookListException: List gluster hook failed
error: [Errno 2] No such file or directory: '/var/lib/glusterd/hooks/1/start/pre'
Thread-333::DEBUG::2013-08-19 18:31:52,161::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call hostsList with () {}
Thread-333::DEBUG::2013-08-19 18:31:52,399::BindingXMLRPC::920::vds::(wrapper) return hostsList with {'status': {'message': 'Done', 'code': 0}, 'hosts': [{'status': 'CONNECTED', 'hostname': '10.70.36.75', 'uuid': '4d37d7f0-ab66-4f13-bd0f-9a5847ca3b2e'}]}
Thread-334::DEBUG::2013-08-19 18:31:52,405::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call volumesList with () {}
Thread-334::DEBUG::2013-08-19 18:31:52,528::BindingXMLRPC::920::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'vol1': {'transportType': ['TCP'], 'uuid': 'edef90bb-fb07-4c7a-a777-98e77b496d48', 'bricks': ['vm03.lab.eng.blr.redhat.com:/tmp/1', 'vm03.lab.eng.blr.redhat.com:/tmp/11'], 'volumeName': 'vol1', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2', 'distCount': '1', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'auth.allow': '*', 'nfs.disable': 'off', 'user.cifs': 'enable'}}, 'vol2': {'transportType': ['TCP'], 'uuid': '0fdfc0d2-6ce1-45bd-bd9e-f5802e1289ad', 'bricks': ['vm03.lab.eng.blr.redhat.com:/tmp/111', 'vm03.lab.eng.blr.redhat.com:/tmp/1111'], 'volumeName': 'vol2', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2', 'distCount': '1', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'auth.allow': '*', 'nfs.disable': 'off', 'user.cifs': 'disable'}}}}
--------------------------

Version-Release number of selected component (if applicable):  Red Hat Storage Console Version: 2.1.0-0.bb9.el6rhs 


How reproducible: Always


Steps to Reproduce:
1. Create a cluster and add the latest RHS node (RHS-2.1-20130814.n.0-RHS-x86_64-DVD1.iso)
2. Select the cluster and click on "Gluster Hooks"
3. Click on "Sync" and watch the vdsm.log

Actual results: Syncing gluster hooks fails


Expected results: Syncing gluster hooks should fetch the hooks without failure


Additional info: vdsm.log attached

--- Additional comment from Prasanth on 2013-08-19 09:19:09 EDT ---



--- Additional comment from RHEL Product and Program Management on 2013-08-19 11:02:19 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Prasanth on 2013-08-20 02:52:33 EDT ---

On further debugging, I could see that the following directories are missing in the RHS ISO, and since vdsm checks for the existence of these directories, hooksList fails:

------
/var/lib/glusterd/hooks/1/start/pre
/var/lib/glusterd/hooks/1/stop/post
/var/lib/glusterd/hooks/1/gsync-create/pre
/var/lib/glusterd/hooks/1/set/pre
------

So we need to confirm whether the omission of these directories in the latest build was intentional or a bug in the rpmbuild itself.


PS: If I manually create those missing directories on the RHS nodes, Sync works fine!

--- Additional comment from Prasanth on 2013-08-20 02:57:55 EDT ---

# pwd
/var/lib/glusterd/hooks/1
[root@rhs-client31 1]# ls -al *
gsync-create:
total 12
drwxr-xr-x. 3 root root 4096 Aug 12 12:23 .
drwxr-xr-x. 6 root root 4096 Aug 12 12:23 ..
drwxr-xr-x. 2 root root 4096 Aug 12 12:23 post

set:
total 12
drwxr-xr-x. 3 root root 4096 Aug 12 12:23 .
drwxr-xr-x. 6 root root 4096 Aug 12 12:23 ..
drwxr-xr-x. 2 root root 4096 Aug 12 12:23 post

start:
total 12
drwxr-xr-x. 3 root root 4096 Aug 12 12:23 .
drwxr-xr-x. 6 root root 4096 Aug 12 12:23 ..
drwxr-xr-x. 2 root root 4096 Aug 12 12:34 post

stop:
total 12
drwxr-xr-x. 3 root root 4096 Aug 12 12:23 .
drwxr-xr-x. 6 root root 4096 Aug 12 12:23 ..
drwxr-xr-x. 2 root root 4096 Aug 12 12:23 pre

--- Additional comment from Amar Tumballi on 2013-08-20 04:49:54 EDT ---

Prashanth,

What does find /var/lib/glusterd/hooks/1/ show?

There should be 4 hook files at the moment in RHS, and I guess the appropriate directories as well.

Regards,
Amar

--- Additional comment from Prasanth on 2013-08-20 05:01:30 EDT ---

(In reply to Amar Tumballi from comment #5)
> Prashanth,
> 
> What does find /var/lib/glusterd/hooks/1/ show?
> 
> There should be 4 hook files at the moment in RHS, and I guess the
> appropriate directories as well.


Amar,

I could see 6 hook files and their directories, as given below:

----
# find /var/lib/glusterd/hooks/1/
/var/lib/glusterd/hooks/1/
/var/lib/glusterd/hooks/1/start
/var/lib/glusterd/hooks/1/start/post
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
/var/lib/glusterd/hooks/1/stop
/var/lib/glusterd/hooks/1/stop/pre
/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
/var/lib/glusterd/hooks/1/gsync-create
/var/lib/glusterd/hooks/1/gsync-create/post
/var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
/var/lib/glusterd/hooks/1/set
/var/lib/glusterd/hooks/1/set/post
/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
----

But earlier (until a few builds back), the directories existed by default even if no hook file was present in them. That doesn't seem to be the case now, so I wanted to confirm whether something changed in between.

-Prasanth

--- Additional comment from Amar Tumballi on 2013-08-20 05:54:19 EDT ---

Bala, can we create the missing directories in the %post part of the glusterfs-server package?

--- Additional comment from Bala.FA on 2013-08-20 07:17:13 EDT ---


Not in %post. They will be installed by glusterfs-server.rpm.

--- Additional comment from Amar Tumballi on 2013-08-21 04:46:47 EDT ---

https://code.engineering.redhat.com/gerrit/#/c/11618

--- Additional comment from Prasanth on 2013-08-21 07:22:34 EDT ---

Verified as fixed in glusterfs-3.4.0.21rhs-1

------------------
[root@vm03 ]# rpm -qa |grep glusterfs
glusterfs-fuse-3.4.0.21rhs-1.el6rhs.x86_64
samba-glusterfs-3.6.9-159.1.el6rhs.x86_64
glusterfs-libs-3.4.0.21rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.21rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.21rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.21rhs-1.el6rhs.x86_64
glusterfs-3.4.0.21rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.21rhs-1.el6rhs.x86_64


[root@vm03 ]# find /var/lib/glusterd/hooks/1/
/var/lib/glusterd/hooks/1/
/var/lib/glusterd/hooks/1/create
/var/lib/glusterd/hooks/1/create/pre
/var/lib/glusterd/hooks/1/create/post
/var/lib/glusterd/hooks/1/set
/var/lib/glusterd/hooks/1/set/pre
/var/lib/glusterd/hooks/1/set/post
/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
/var/lib/glusterd/hooks/1/start
/var/lib/glusterd/hooks/1/start/pre
/var/lib/glusterd/hooks/1/start/post
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/remove-brick
/var/lib/glusterd/hooks/1/remove-brick/pre
/var/lib/glusterd/hooks/1/remove-brick/post
/var/lib/glusterd/hooks/1/delete
/var/lib/glusterd/hooks/1/delete/pre
/var/lib/glusterd/hooks/1/delete/post
/var/lib/glusterd/hooks/1/add-brick
/var/lib/glusterd/hooks/1/add-brick/pre
/var/lib/glusterd/hooks/1/add-brick/post
/var/lib/glusterd/hooks/1/gsync-create
/var/lib/glusterd/hooks/1/gsync-create/pre
/var/lib/glusterd/hooks/1/gsync-create/post
/var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
/var/lib/glusterd/hooks/1/stop
/var/lib/glusterd/hooks/1/stop/pre
/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
/var/lib/glusterd/hooks/1/stop/post
------------------

================================

But considering this change is coming from upstream, we need to handle this in a future release of RHS-C.
Comment 2 Dusmant 2013-08-23 00:16:22 EDT
Amar, as we discussed, we will handle this issue (Prasanth is going to create a separate bug) in the next release of RHS-C. But you will be taking care of it in the Big Bend release of RHS.
Comment 3 Prasanth 2013-08-23 02:10:37 EDT
(In reply to Dusmant from comment #2)
> Amar, as we discussed we will handle this issue ( Prasanth is going to
> create a separate bug ) in the next release of RHS-C. But you would be
> taking care of it in the Big Bend release in RHS.

Dusmant, this is the tracker bug I opened for RHS-C that has to be targeted for Corbett. The original Bug 998514 was already fixed and verified in the latest glusterfs build.
Comment 4 Dusmant 2013-10-07 04:12:40 EDT
Considering U1 and U2 are based on the BB release, the issue will not exist (because the upstream change is altered in the BB release, and that will continue the same way in the update releases until the upstream base is changed). So this issue would not exist in U1 or U2 either. Hence, moving it to a future release...
Comment 5 Dusmant 2013-10-07 06:20:18 EDT
The final discussion came to the conclusion that it's good to have this VDSM fix, just in case the end user manually deletes any of the pre or post directories. So we will take it up in Corbett, which is the GA release.
Comment 6 Prasanth 2013-10-17 09:19:08 EDT
Sync is still failing with "Error while executing action RefreshGlusterHooks: Internal Engine Error". So moving back to Assigned.

PS: Is that due to the same issue mentioned in Bug 1018076?
Comment 7 Timothy Asir 2013-10-23 06:53:13 EDT
Yes
Comment 8 Dustin Tsang 2013-10-28 13:00:44 EDT
Verified in cb5.

Steps to verify:
Pre: create a 2-node cluster. On one of the nodes:
1. find /var/lib/glusterd/hooks/ -name "*post" | xargs rm -rf
2. In the GUI, Cluster main tab -> Gluster Hooks sub-tab, click Sync
=> (yields no error)
3. find /var/lib/glusterd/hooks/ -name "*pre" | xargs rm -rf
4. In the GUI, Cluster main tab -> Gluster Hooks sub-tab, click Sync
=> (yields no error)
Comment 9 Shalaka 2014-01-13 04:27:22 EST
Please review the edited DocText and sign off.
Comment 11 errata-xmlrpc 2014-02-25 02:35:23 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
