Bug 1028972 - [RHS-C] Error while executing action Add Gluster Hook: Internal Engine Error
Summary: [RHS-C] Error while executing action Add Gluster Hook: Internal Engine Error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.3.0
Assignee: Darshan
QA Contact: SATHEESARAN
URL:
Whiteboard: gluster
Depends On: 1024263
Blocks:
 
Reported: 2013-11-11 11:48 UTC by Prasanth
Modified: 2016-02-10 18:58 UTC
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The hookLevel parameter must be in lower case for the hookAdd verb to build the correct path. The parameter is now converted to lower case, so adding a hook works as expected.
Clone Of: 1024263
Environment:
Last Closed: 2014-01-21 16:21:13 UTC
oVirt Team: Gluster
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0040 0 normal SHIPPED_LIVE vdsm bug fix and enhancement update 2014-01-21 20:26:21 UTC
oVirt gerrit 21017 0 None None None Never
oVirt gerrit 21538 0 None None None Never

Description Prasanth 2013-11-11 11:48:18 UTC
+++ This bug was initially created as a clone of Bug #1024263 +++

Description of problem:

Error while executing action Add Gluster Hook: Internal Engine Error

Version-Release number of selected component (if applicable): Red Hat Storage Console Version: 2.1.2-0.21.beta1.el6_4

[root@vm10 1]# rpm -qa |grep glusterfs
glusterfs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.35.1u2rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.35.1u2rhs-1.el6rhs.x86_64
samba-glusterfs-3.6.9-160.3.el6rhs.x86_64
 
[root@vm10 1]# rpm -qa |grep vdsm
vdsm-python-cpopen-4.13.0-19.gitc2a87f5.el6rhs.x86_64
vdsm-python-4.13.0-19.gitc2a87f5.el6rhs.x86_64
vdsm-4.13.0-19.gitc2a87f5.el6rhs.x86_64
vdsm-reg-4.13.0-19.gitc2a87f5.el6rhs.noarch
vdsm-xmlrpc-4.13.0-19.gitc2a87f5.el6rhs.noarch
vdsm-cli-4.13.0-19.gitc2a87f5.el6rhs.noarch
vdsm-gluster-4.13.0-19.gitc2a87f5.el6rhs.noarch



How reproducible: Always


Steps to Reproduce:
1. Create a cluster and add 2 RHS U2 nodes
2. Select the cluster, click on the "Gluster Hooks" sub-tab and click on "Sync"
3. Now, go to one of the servers and execute # rm -rf /var/lib/glusterd/hooks/1/*
4. Click on "Sync" again; the hooks will now be in the "missing conflict" state.
5. Select any hook and click on "Resolve Conflicts"
6. Resolve the Missing Hook Conflict using the "Copy the hook to all the servers" option and click OK.

Here you will see the error. 

Actual results: Resolve Missing Hook Conflict does not create the hooks directory with the proper name and hence fails. vdsm.log shows the following:

-----------
Thread-3123::ERROR::2013-10-29 14:30:41,707::BindingXMLRPC::993::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 267, in hookAdd
    hookData, hookMd5Sum, enable)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterHookAdd
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: '/var/lib/glusterd/hooks/1/set/post/tmpwfVnUZ'
Thread-3124::DEBUG::2013-10-29 14:30:43,361::BindingXMLRPC::974::vds::(wrapper) client [10.70.36.27]::call hostsList with () {}
-----------

It created the "POST" directory in upper case, which is why the subsequent path lookup failed. See below:

----
[root@vm10 set]# pwd
/var/lib/glusterd/hooks/1/set

[root@vm10 set]# ll
total 4
drwxr-xr-x 2 root root 4096 Oct 29 14:29 POST
----
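
The failure follows directly from how the hook path is assembled from the gluster command, the hook level and the hook name. Below is a minimal sketch of that path construction (the helper name and base path are illustrative, not the actual VDSM code); lowering the level, as the eventual fix does, makes "POST" and "post" resolve to the same directory:

----
import os

# Base path taken from the log above; the helper itself is hypothetical.
GLUSTER_HOOKS_PATH = '/var/lib/glusterd/hooks/1'


def hook_path(gluster_cmd, hook_level, hook_name):
    # gluster only ever creates and reads 'pre'/'post' in lower case, so the
    # level coming from the engine ('POST') has to be lowered before joining;
    # otherwise the script lands under .../set/POST/ while later steps look
    # for .../set/post/ and hit ENOENT, as in the traceback above.
    return os.path.join(GLUSTER_HOOKS_PATH, gluster_cmd,
                        hook_level.lower(), hook_name)


# With the level lowered, both spellings resolve to the same path:
print(hook_path('set', 'POST', 'S30samba-set.sh'))
# /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
----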

Expected results: Resolving the hook conflict should create all the missing directories and files with the proper names and permissions.
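
For reference, a minimal sketch of what resolving a missing hook conflict is expected to do on each server (the function name, modes and paths are assumptions for illustration, not the actual VDSM implementation):

----
import os


def add_missing_hook(gluster_cmd, hook_level, hook_name, hook_data,
                     base='/var/lib/glusterd/hooks/1'):
    # Build the directory with the level in lower case so it matches what
    # gluster expects (e.g. .../set/post, never .../set/POST).
    hook_dir = os.path.join(base, gluster_cmd, hook_level.lower())
    if not os.path.isdir(hook_dir):
        os.makedirs(hook_dir, 0o755)    # assumed mode for hook directories
    hook_file = os.path.join(hook_dir, hook_name)
    with open(hook_file, 'w') as f:
        f.write(hook_data)
    os.chmod(hook_file, 0o755)          # hook scripts need to be executable
    return hook_file
----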


Additional info: Screenshot is attached and sosreports will be attached soon.

--- Additional comment from RHEL Product and Program Management on 2013-10-29 05:35:49 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 1 Prasanth 2013-11-11 11:52:42 UTC
Red Hat Enterprise Virtualization Manager Version: 3.3.0-0.31.beta1.el6ev

Comment 3 SATHEESARAN 2013-12-13 13:01:29 UTC
Tested with RHEVM IS27 (3.3.0-0.40.rc.el6ev) and glusterfs-3.4.0.49rhs-1.el6rhs, with the following steps:

0. Created a posixfs data center of compatibility 3.2
1. Created a gluster-enabled cluster with compatibility 3.3
2. Added 4 RHSS 2.1 Update 2 nodes, one after the other, to the gluster-enabled cluster
3. Selected the cluster, clicked on the "Gluster Hooks" sub-tab and clicked on "Sync"
4. Went to one of the servers and executed # rm -rf /var/lib/glusterd/hooks/1/*
5. Clicked on "Sync" again; the hooks were then in the "missing conflict" state
6. Selected a hook and clicked on "Resolve Conflicts"
7. Resolved the Missing Hook Conflict using the "Copy the hook to all the servers" option and clicked OK

All the operations completed without errors, and the hook was copied to all the servers where it was missing.

Marking this bug as VERIFIED

Comment 4 errata-xmlrpc 2014-01-21 16:21:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0040.html

