Bug 1115091 - [vdsm] [gluster] Vdsm not operational after installation with gluster service enabled
Summary: [vdsm] [gluster] Vdsm not operational after installation with gluster service enabled
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Bala.FA
QA Contact: Gil Klein
URL:
Whiteboard: gluster
Depends On: 1108448
Blocks:
 
Reported: 2014-07-01 14:24 UTC by Piotr Kliczewski
Modified: 2016-02-10 19:28 UTC

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-09-23 06:26:37 UTC
oVirt Team: Gluster
Embargoed:


Attachments
supervdsm log file (77.88 KB, text/x-log)
2014-07-02 07:04 UTC, Piotr Kliczewski

Description Piotr Kliczewski 2014-07-01 14:24:32 UTC
Description of problem:

I wanted to test gluster-related features, but after host installation I noticed that the host is non-operational.


How reproducible:

Steps to Reproduce:
I performed the following steps on my f20 host using XML-RPC (a rough command-line sketch follows the list):

1. Installed the oVirt 3.5 repo.
2. Installed the engine.
3. Installed vdsm on the same host - status UP.
4. Removed vdsm.
5. Enabled the gluster service on the cluster.
6. Installed vdsm again (tried several times with the same result).
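
For reference, a rough sketch of steps 1 and 2 (repo URL and package names as commonly used for oVirt 3.5; treat them as assumptions, not verbatim from my history). Steps 3-6 were done through the engine, where vdsm is deployed by adding the host and the gluster service is a cluster-level option:

# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
# yum install ovirt-engine
# engine-setup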

Actual results:
Host is set to Non-Operational.

Expected results:
Host status is UP

Additional info:
I can see the glusterd and glusterfsd services being active.
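
A quick way to confirm this (a sketch, assuming the systemd unit names glusterd and glusterfsd):

# systemctl is-active glusterd glusterfsd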

Engine:
2014-07-01 10:38:53,722 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Host fedora's
following network(s) are not synchronized with their Logical Network
configuration: ovirtmgmt.

vdsm:

Thread-13::DEBUG::2014-07-01
10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('glusterfs-geo-replication',) not found

Thread-13::ERROR::2014-07-01
10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.

Comment 1 Piotr Kliczewski 2014-07-01 14:29:19 UTC
After setting SELinux to permissive mode, I was able to check peer status.
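
Roughly (a sketch, not verbatim from my shell history; note that setenforce 0 only switches SELinux to permissive until the next reboot):

# setenforce 0
# gluster peer status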

Comment 2 Dan Kenigsberg 2014-07-01 18:18:48 UTC
Could you attach your supervdsm.log?

Comment 3 Piotr Kliczewski 2014-07-02 07:04:49 UTC
Created attachment 914001 [details]
supervdsm log file

Comment 4 Bala.FA 2014-07-02 09:57:47 UTC
After analysing the logs, it looks like glusterd is not running:

GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1

Can you give output of `service glusterd status`?
If not running, can you start glusterd service?
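
On a systemd host such as f20, the equivalents would be (a sketch):

# systemctl status glusterd
# systemctl start glusterd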

Comment 5 Piotr Kliczewski 2014-07-02 10:58:38 UTC
Here is the output from running the service status and gluster commands without changing the SELinux settings.

[root@f20 ~]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Tue 2014-07-01 11:12:29 CEST; 4h 9min ago
 Main PID: 31056 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─31056 /usr/sbin/glusterd -p /run/glusterd.pid

Jul 01 11:12:29 f20.example.com systemd[1]: Started GlusterFS, a
clustered file-system server.
Jul 01 11:12:29 f20.example.com python[31062]: SELinux is preventing
/usr/sbin/glusterfsd from write access on the sock_file .

                                               *****  Plugin catchall
(100. confidence) suggests   **************************...
Hint: Some lines were ellipsized, use -l to show in full.

[root@f20 ~]# gluster peer status
Connection failed. Please check if gluster daemon is operational.
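
The AVC denial mentioned in the systemd output above can usually be inspected in more detail with the audit tools, e.g. (a diagnostic sketch, assuming auditd is running):

# ausearch -m avc -ts recent | grep gluster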

Comment 6 Bala.FA 2014-07-02 11:11:36 UTC
Two problems here.

1. glusterd is not running.  Fix: Please start it.  vdsm will work normally.

2. SELinux.  gluster doesn't support SELinux yet.  Please enable permissive mode (a command sketch follows below).


Hope this helps.
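
A sketch of enabling permissive mode, both immediately and across reboots (standard SELinux commands, not specific to gluster):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config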

Comment 7 Sahina Bose 2014-09-23 06:26:37 UTC
This issue was due to SELinux being enabled on the node with gluster, as per Comment 6.

This is a known limitation with glusterfs. Closing this as DEFERRED until this issue is resolved by glusterfs.

