Bug 1446254

Summary: libqb does not support filesystem sockets for IPC on Linux
Product: Red Hat Enterprise Linux 7
Component: libqb
Version: 7.3
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Christine Caulfield <ccaulfie>
Assignee: Christine Caulfield <ccaulfie>
QA Contact: cluster-qe <cluster-qe>
CC: ccaulfie, cfeist, cluster-maint, jfriesse, jpokorny, kgaillot, mjuricek, ushkalim
Flags: jpokorny: needinfo?
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Fixed In Version: libqb-1.0.1-3.el7
Last Closed: 2017-08-01 18:00:14 UTC
Type: Bug

Description Christine Caulfield 2017-04-27 14:13:36 UTC
Description of problem:

libqb currently uses abstract sockets for inter-process communication on Linux, and filesystem sockets on other platforms that don't support abstract sockets.

For use inside containers, where abstract socket names are shared with other containers and with the host, this causes name clashes that prevent multiple containers from running the same daemons.
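
For background, a minimal standalone sketch of the difference (hypothetical demo code, not libqb source): the only difference on the wire is the first byte of sun_path. An abstract name lives in a kernel namespace shared by every process in the same network namespace, including containers that share it, while a filesystem name is a path and is therefore separated by mount namespaces.

  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  static int bind_unix(const char *name, int abstract)
  {
      struct sockaddr_un addr;
      socklen_t len;
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);

      if (fd < 0)
          return -1;

      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      if (abstract) {
          /* Leading '\0' marks the name as abstract (Linux only). */
          memcpy(addr.sun_path + 1, name, strlen(name));
          len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name);
      } else {
          /* A plain path: visible via, and namespaced by, the filesystem. */
          strncpy(addr.sun_path, name, sizeof(addr.sun_path) - 1);
          len = sizeof(addr);
      }
      if (bind(fd, (struct sockaddr *)&addr, len) < 0) {
          perror("bind");
          close(fd);
          return -1;
      }
      return fd;
  }

  int main(void)
  {
      unlink("/tmp/qb-demo");                    /* remove a stale path, if any */
      int abs_fd = bind_unix("qb-demo", 1);      /* abstract name */
      int fs_fd = bind_unix("/tmp/qb-demo", 0);  /* filesystem path */
      printf("abstract fd=%d, filesystem fd=%d\n", abs_fd, fs_fd);
      return 0;
  }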

Version-Release number of selected component (if applicable):
all Linux versions of libqb

How reproducible:
easily.

Steps to Reproduce:
1. Try to start up two containers, each running corosync

Actual results:
The second container's corosync fails to bind its abstract IPC socket, because the name is already taken by the first container, and quits.

Expected results:
Both containers start cleanly and use their own sockets for intra-container communication.
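
To see the clash in isolation, here is a minimal standalone demo (hypothetical code, not part of libqb): binding the same abstract name twice within one network namespace fails with EADDRINUSE (98), the same error the second container's daemon hits.

  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  int main(void)
  {
      struct sockaddr_un addr;
      socklen_t len;
      int fd1 = socket(AF_UNIX, SOCK_STREAM, 0);
      int fd2 = socket(AF_UNIX, SOCK_STREAM, 0);

      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      memcpy(addr.sun_path + 1, "demo", 4);      /* abstract name "demo" */
      len = offsetof(struct sockaddr_un, sun_path) + 1 + 4;

      if (bind(fd1, (struct sockaddr *)&addr, len) < 0)
          perror("first bind");                  /* not expected to fail */
      if (bind(fd2, (struct sockaddr *)&addr, len) < 0)
          perror("second bind");                 /* prints: Address already in use */
      return 0;
  }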

Additional info:

Comment 2 Christine Caulfield 2017-04-28 15:15:05 UTC
  Branch: refs/heads/master
  Home:   https://github.com/ClusterLabs/libqb
  Commit: 41a24a3df7f894ceb0d66824b52c08e0365c6fc1
      https://github.com/ClusterLabs/libqb/commit/41a24a3df7f894ceb0d66824b52c08e0365c6fc1
  Author: Chrissie Caulfield <ccaulfie>
  Date:   2017-04-28 (Fri, 28 Apr 2017)

  Changed paths:
    M configure.ac
    M docs/mainpage.h
    M lib/ipc_int.h
    M lib/ipc_setup.c
    M lib/ipc_socket.c
    M tests/check_ipc.c

Comment 3 Jan Pokorný [poki] 2017-05-04 13:20:44 UTC
Regarding the reproducer:

Since containers are often used to run parallel instances, corosync is
not really the best example (it runs with realtime scheduling
priority...); furthermore, for production use, creating a pseudo-cluster
within containers is completely superfluous and a waste of resources, so
putting corosync and containers into a single sentence should be frowned
upon.

This may be a bit better:

0. have a top-level machine + container within
1. in both:
   # yum install pacemaker
2. in a top-level:
   # /usr/libexec/pacemaker/lrmd
>  [daemonized and still runs]
3. in a container:
   # /usr/libexec/pacemaker/lrmd
   # tail /var/log/pacemaker.log
>  May 04 15:04:38 [4244] f26       lrmd:     info: crm_log_init:       Changed active directory to /var/lib/pacemaker/cores
>  May 04 15:04:38 [4244] f26       lrmd:     info: qb_ipcs_us_publish: server name: lrmd
>  May 04 15:04:38 [4244] f26       lrmd:     info: main:       Starting
>  May 04 15:07:00 [4249] f26       lrmd:     info: crm_log_init:       Changed active directory to /var/lib/pacemaker/cores
>  May 04 15:07:00 [4249] f26       lrmd:     info: qb_ipcs_us_publish: server name: lrmd
>  May 04 15:07:00 [4249] f26       lrmd:    error: qb_ipcs_us_publish: Could not bind AF_UNIX (): Address already in use (98)
   ^ this line makes it clear what's wrong
>  May 04 15:07:00 [4249] f26       lrmd:     info: qb_ipcs_us_withdraw:        withdrawing server sockets
>  May 04 15:07:00 [4249] f26       lrmd:    error: mainloop_add_ipc_server:    Could not start lrmd IPC server: Address already in use (-98)
>  May 04 15:07:00 [4249] f26       lrmd:    error: main:       Failed to create IPC server: shutting down and inhibiting respawn
>  May 04 15:07:00 [4249] f26       lrmd:     info: crm_xml_cleanup:    Cleaning up memory from libxml2

Comment 4 Jan Pokorný [poki] 2017-05-04 13:23:48 UTC
With the fix in place and step 3 modified as follows:

3. in a container:
   # touch /etc/libqb/force-filesystem-sockets
   # /usr/libexec/pacemaker/lrmd
 
the lrmd process should keep running in the container,
just as it keeps running outside.
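
For illustration, a sketch of the mechanism (the function and macro names
below are assumptions for this example, not the actual libqb
implementation; the flag-file path matches the reproducer above):
abstract sockets stay the default on Linux, and the presence of the flag
file switches the IPC layer to filesystem sockets at runtime.

  #include <stdbool.h>
  #include <sys/stat.h>

  #define FORCE_SOCKETS_FILE "/etc/libqb/force-filesystem-sockets"

  static bool use_filesystem_sockets(void)
  {
      struct stat st;

      /* Creating the flag file opts a host or container image into
       * filesystem sockets; no rebuild or recompilation is needed. */
      return stat(FORCE_SOCKETS_FILE, &st) == 0;
  }

Because the check happens at runtime, a container image can simply ship
the flag file (as the touch command above does) to get per-container
socket namespacing.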

Comment 6 Jan Pokorný [poki] 2017-05-17 08:26:50 UTC
This enforcement feature raises the importance of finally fixing the
memory leaks identified a while back, at the same time the feature is
rolled out:

https://github.com/ClusterLabs/libqb/pull/194

(Perhaps as a whole new upstream release + rebase?)

Comment 14 errata-xmlrpc 2017-08-01 18:00:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1896