Bug 1475755 - gluster-blockd fails to start in RHGS container
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Michael Adam
QA Contact: krishnaram Karthick
Depends On:
Blocks: 1445448
Reported: 2017-07-27 05:27 EDT by krishnaram Karthick
Modified: 2017-09-07 10:00 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-07 05:07:41 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description krishnaram Karthick 2017-07-27 05:27:56 EDT
Description of problem:
The gluster-blockd service fails to start in the RHGS container image due to a dependency failure of the rpcbind service.

The rpcbind service fails to start in the container because port 111 is already in use by the rpcbind service running on the OpenShift node.

We'll need to reconfigure one of the two rpcbind instances to use a different port. This also means we have to open the new port in the firewall and document it. I'll raise a doc bug once we finalize the port number.
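
As a rough way to confirm the conflict, the same port check can be run on the node and inside the gluster container (the pod name below is only an example):

netstat -tnap | grep 111                                      # on the OpenShift node
oc exec glusterfs-storage-xxxxx -- netstat -tnap | grep 111   # inside the gluster container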

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-12.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Configure CNS.
2. Check whether the gluster-blockd service is up.

Actual results:
gluster-blockd service is down

Expected results:
gluster-blockd service should be up
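
A quick way to check this from inside the gluster pod (pod name is only an example):

oc rsh glusterfs-storage-xxxxx
sh-4.2# systemctl status gluster-blockd
sh-4.2# systemctl status rpcbind.socket

Both units should be active; here gluster-blockd stays down because rpcbind.socket fails to start.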

Additional info:
Comment 2 krishnaram Karthick 2017-07-27 05:33:11 EDT
sh-4.2# systemctl status rpcbind.socket -l
● rpcbind.socket - RPCbind Server Activation Socket
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.socket; enabled; vendor preset: enabled)
   Active: failed (Result: resources)
   Listen: /var/run/rpcbind.sock (Stream)
           [::]:111 (Stream)
           0.0.0.0:111 (Stream)

Jul 27 09:06:58 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: rpcbind.socket failed to listen on sockets: Address already in use
Jul 27 09:06:58 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Failed to listen on RPCbind Server Activation Socket.
Jul 27 09:06:58 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Unit rpcbind.socket entered failed state.
Jul 27 09:06:58 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Starting RPCbind Server Activation Socket.
Jul 27 09:15:46 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: rpcbind.socket failed to listen on sockets: Address already in use
Jul 27 09:15:46 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Failed to listen on RPCbind Server Activation Socket.
Jul 27 09:15:46 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Starting RPCbind Server Activation Socket.
Jul 27 09:15:50 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: rpcbind.socket failed to listen on sockets: Address already in use
Jul 27 09:15:50 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Failed to listen on RPCbind Server Activation Socket.
Jul 27 09:15:50 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Starting RPCbind Server Activation Socket.
sh-4.2# exit
Comment 3 Raghavendra Talur 2017-07-27 06:03:34 EDT
Karthick,

Please provide the gluster-blockd and glusterd service files from the container,
as well as the netstat output from both the container and the host:
netstat -tnap | grep 111
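
For example, assuming the gluster pod is named glusterfs-storage-xxxxx, something along these lines should capture all of it:

oc exec glusterfs-storage-xxxxx -- systemctl cat gluster-blockd glusterd   # service files from the container
oc exec glusterfs-storage-xxxxx -- netstat -tnap | grep 111                # netstat inside the container
netstat -tnap | grep 111                                                   # netstat on the host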
Comment 4 Mohamed Ashiq 2017-07-27 15:32:12 EDT
Hi Karthick,

Talur and I gave it a try on our setup.

Steps we followed on the host:
1) modprobe target_core_user
2) systemctl enable rpcbind
3) systemctl start rpcbind

After that, gluster-blockd seems to be working as expected in the container on our setups.
Can you give it a try again?
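
A sanity check after these steps could look roughly like this (pod name is only an example):

systemctl status rpcbind                  # on the host: should be active
netstat -tnap | grep 111                  # on the host: confirms port 111 is in use by rpcbind
oc rsh glusterfs-storage-xxxxx
sh-4.2# systemctl status gluster-blockd   # inside the container: should report active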
Comment 5 Humble Chirammal 2017-08-02 12:26:37 EDT
Once the 3 acks are in place, I will move this to ON_QA.
Comment 11 krishnaram Karthick 2017-09-07 05:07:41 EDT
The issue reported in this bug is no longer seen in build cns-deploy-5.0.0-32.el7rhgs.

Closing the bug as this issue is not seen with the latest builds.
