Bug 1213796 - systemd integration with glusterfs
Summary: systemd integration with glusterfs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-21 10:24 UTC by Sachidananda Urs
Modified: 2018-10-08 02:13 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-08 02:13:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sachidananda Urs 2015-04-21 10:24:05 UTC
Need support to enable socket-based activation of glusterd...

ref: http://0pointer.de/blog/projects/socket-activation.html

When glusterd is started via socket-based activation, it tries to bind to port 24007, which is already held by systemd, and throws the following errors:

[2015-04-21 18:13:27.352659] I [MSGID: 100030] [glusterfsd.c:2288:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 3.8dev (args: /usr/local/sbin/glusterd -p /var/run/glusterd.pid)
[2015-04-21 18:13:27.382292] I [glusterd.c:1282:init] 0-management: Maximum allowed open file descriptors set to 65536
[2015-04-21 18:13:27.382345] I [glusterd.c:1327:init] 0-management: Using /var/lib/glusterd as working directory
[2015-04-21 18:13:27.392716] E [socket.c:823:__socket_server_bind] 0-socket.management: binding to  failed: Address already in use
[2015-04-21 18:13:27.392753] E [socket.c:826:__socket_server_bind] 0-socket.management: Port is already in use
[2015-04-21 18:13:27.392782] W [rpcsvc.c:1589:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
[2015-04-21 18:13:27.392809] E [glusterd.c:1509:init] 0-management: creation of listener failed
[2015-04-21 18:13:27.392850] E [xlator.c:426:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2015-04-21 18:13:27.392865] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2015-04-21 18:13:27.392878] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-04-21 18:13:27.393372] W [glusterfsd.c:1212:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2015-04-21 18:13:27.429586] I [MSGID: 100030] [glusterfsd.c:2288:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 3.8dev (args: /usr/local/sbin/glusterd -p /var/run/glusterd.pid)
[2015-04-21 18:13:27.447571] I [glusterd.c:1282:init] 0-management: Maximum allowed open file descriptors set to 65536
[2015-04-21 18:13:27.447624] I [glusterd.c:1327:init] 0-management: Using /var/lib/glusterd as working directory
[2015-04-21 18:13:27.486555] E [socket.c:823:__socket_server_bind] 0-socket.management: binding to  failed: Address already in use
[2015-04-21 18:13:27.486595] E [socket.c:826:__socket_server_bind] 0-socket.management: Port is already in use
[2015-04-21 18:13:27.486624] W [rpcsvc.c:1589:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
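
For reference, socket activation would require the daemon side to take over the already-bound fd that systemd passes in, instead of bind()ing port 24007 itself, which is what produces the EADDRINUSE above. A rough sketch of the idea using libsystemd's sd_listen_fds()/sd_is_socket_inet() -- illustrative only, the function name here is made up and this is not current glusterd code:

#include <stdio.h>
#include <sys/socket.h>
#include <systemd/sd-daemon.h>   /* sd_listen_fds(), SD_LISTEN_FDS_START */

/* Return a listening fd for the management port: prefer a socket
 * handed over by systemd (socket activation), otherwise fall back
 * to creating our own. */
static int
glusterd_get_listen_fd_sketch (void)
{
        int n = sd_listen_fds (0);      /* number of fds passed by systemd */

        if (n < 0) {
                fprintf (stderr, "sd_listen_fds failed\n");
                return -1;
        }

        if (n >= 1) {
                int fd = SD_LISTEN_FDS_START;   /* first passed fd is always 3 */

                /* sanity check: a listening TCP socket on port 24007 */
                if (sd_is_socket_inet (fd, AF_UNSPEC, SOCK_STREAM, 1, 24007) > 0)
                        return fd;      /* reuse it; no bind(), so no EADDRINUSE */
        }

        /* Not socket-activated: normal path, create and bind our own
         * socket (bind()/listen() omitted from this sketch). */
        return socket (AF_INET, SOCK_STREAM, 0);
}

Something along these lines would have to be plumbed into the rpc/socket layer (__socket_server_bind() in the log above) so that it reuses the inherited fd instead of binding again.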

Comment 1 Sachidananda Urs 2015-04-21 10:25:12 UTC
[root@localhost system]# cat glusterd.socket
[Unit]
Description=glusterd server socket

[Socket]
ListenStream=24007
ListenStream=/var/run/glusterd.socket
Accept=no

[Install]
WantedBy=sockets.target
[root@localhost system]# 

====================================================================

[root@localhost system]# cat glusterd.service
[Unit]
Description=glusterd - Gluster elastic volume management daemon
Documentation=man:glusterd(8)
After=network.target
Wants=network-online.target
Wants=syslog.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/local/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@localhost system]#
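
The activation steps themselves are the usual systemctl ones (sketched from memory, not a verbatim transcript of my session); with the current glusterd this still fails as shown above, because glusterd does not take over the socket from systemd:

systemctl daemon-reload
systemctl disable glusterd.service
systemctl enable glusterd.socket
systemctl start glusterd.socket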

Comment 2 Niels de Vos 2015-04-21 10:58:55 UTC
I am not sure if socket activation makes a lot of sense here. Gluster clients may already know what bricks are available. When a Gluster Server reboots, glusterd should start the brick processes. When socket activation is used, glusterd will only start after a connection on the socket (unix socket or tcp port) is detected. I think gluster clients can try to connect to the brick processes without touching any of the glusterd sockets.

What is the use-case you are trying to solve here?

Comment 3 Sachidananda Urs 2015-04-21 12:56:59 UTC
GlusterD would start upon reboot anyway. The use case here is that the admin/user can enable and start glusterd.socket on an as-needed basis and not have to worry about restarting glusterd (in case the process gets killed by a crash, an OOM kill, or any other means); it gets started automatically by the next volume info or any other volume command. A failure does not go unnoticed either, since it is handled by systemd for further diagnosis.

Of course, we would have to document that this feature should not be used together with quorum.

Comment 4 Niels de Vos 2015-04-21 15:20:57 UTC
I am not (yet) convinced this is the right approach. If restarting glusterd after a failure is needed, we can add a Restart=... option to the glusterd.service file. See 'man systemd.service' for more details. Restarting on a crash/OOM might not be a good idea in general: it may well be that something is horribly wrong, and restarting does not guarantee that the whole system recovers and becomes usable again.
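
For example (an untested sketch; the restart policy and the RestartSec value are only placeholders to illustrate the idea), the [Service] section could gain:

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/local/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process
Restart=on-failure
RestartSec=5

Restart=on-failure keeps restarts limited to abnormal exits instead of restarting on every kind of exit.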

Comment 5 Sachidananda Urs 2015-04-22 04:32:05 UTC
Yep, agreed on the `Restart=' option. This is more of an on-demand activation; more of a good-to-have feature.

Comment 6 Atin Mukherjee 2018-10-07 13:08:43 UTC
Sac - I believe that this BZ is just hanging around without any activity. In GD1, we're not looking to take this enhancement. I'd suggest closing this bug.

Comment 7 Sachidananda Urs 2018-10-08 01:38:07 UTC
(In reply to Atin Mukherjee from comment #6)
> Sac - I believe that this BZ is just hanging around without any activity. In
> GD1, we're not looking to take this enhancement. I'd suggest closing this
> bug.

Ack! We can close this bug.

