Bug 1221743 - glusterd not starting after a fresh install of 3.7.0-1.el6rhs build due to missing library files
Summary: glusterd not starting after a fresh install of 3.7.0-1.el6rhs build due to missing library files
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: build
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Bala.FA
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1223636
 
Reported: 2015-05-14 17:27 UTC by Prasanth
Modified: 2015-11-23 02:59 UTC
CC List: 14 users

Fixed In Version: glusterfs-3.7.0-2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:43:05 UTC
Embargoed:


Links:
Red Hat Product Errata RHSA-2015:1495 (SHIPPED_LIVE) - Important: Red Hat Gluster Storage 3.1 update - Last Updated: 2015-07-29 08:26:26 UTC

Description Prasanth 2015-05-14 17:27:42 UTC
Description of problem:

glusterd not starting after a fresh install of 3.7.0-1.el6rhs build

Version-Release number of selected component (if applicable):
# rpm -qa |grep gluster
glusterfs-libs-3.7.0-1.el6rhs.x86_64
glusterfs-cli-3.7.0-1.el6rhs.x86_64
glusterfs-3.7.0-1.el6rhs.x86_64
glusterfs-fuse-3.7.0-1.el6rhs.x86_64
glusterfs-server-3.7.0-1.el6rhs.x86_64
glusterfs-api-3.7.0-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-1.el6rhs.x86_64
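
The package list above shows the expected client and server RPMs installed, so a quick check is whether any of them actually ships the rpc-transport shared objects glusterd loads at startup. A minimal diagnostic sketch, querying all installed gluster packages since which RPM is supposed to own socket.so is an assumption here:

# rpm -ql $(rpm -qa 'glusterfs*') | grep rpc-transport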

How reproducible: 100%


Steps to Reproduce:
1. Install the first available RHGS 3.1 downstream build
2. Try to start the glusterd service
3. Check the glusterd service status

Actual results:
[root@dhcp42-183 /]# service glusterd status
glusterd is stopped

[root@dhcp42-183 /]# service glusterd start
Starting glusterd:                                         [FAILED]

[root@dhcp42-183 /]# service glusterd restart
Starting glusterd:                                         [FAILED]
-------

From logs:
#####
[2015-05-14 13:48:26.814890] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.0 (args: /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
[2015-05-14 13:48:26.821963] I [glusterd.c:1282:init] 0-management: Maximum allowed open file descriptors set to 65536
[2015-05-14 13:48:26.822024] I [glusterd.c:1327:init] 0-management: Using /var/lib/glusterd as working directory
[2015-05-14 13:48:26.822472] E [rpc-transport.c:291:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.7.0/rpc-transport/socket.so: cannot open shared object file: No such file or directory
[2015-05-14 13:48:26.822502] W [rpc-transport.c:295:rpc_transport_load] 0-rpc-transport: volume 'socket.management': transport-type 'socket' is not valid or not found on this machine
[2015-05-14 13:48:26.822525] W [rpcsvc.c:1595:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2015-05-14 13:48:26.822542] E [glusterd.c:1509:init] 0-management: creation of listener failed
[2015-05-14 13:48:26.822557] E [xlator.c:426:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2015-05-14 13:48:26.822572] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2015-05-14 13:48:26.822586] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-05-14 13:48:26.823128] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
#####
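
The first error above shows glusterd failing to load /usr/lib64/glusterfs/3.7.0/rpc-transport/socket.so, after which the listener creation, the management xlator init, and the graph activation fail in turn. A minimal sketch to confirm that root cause on an affected host, using the path taken from the error line (rpm -qf simply reports the file as missing if no package installed it):

# ls -l /usr/lib64/glusterfs/3.7.0/rpc-transport/
# rpm -qf /usr/lib64/glusterfs/3.7.0/rpc-transport/socket.so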


Expected results: glusterd should start successfully


Additional info:

Comment 1 Prasanth 2015-05-21 06:10:28 UTC
Verified as fixed in glusterfs-3.7.0-2
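
The exact verification steps are not recorded in this comment; a minimal sketch of the obvious checks on a host updated to the fixed build would be:

# rpm -qa | grep gluster
# ls -l /usr/lib64/glusterfs/3.7.0/rpc-transport/socket.so
# service glusterd start
# service glusterd status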

Comment 2 errata-xmlrpc 2015-07-29 04:43:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

