Bug 1334170 - Cannot run daemon as non-root and tcp listener is disabled
Summary: Cannot run daemon as non-root and tcp listener is disabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: fence-virt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ryan McCabe
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-09 05:05 UTC by Andrew Beekhof
Modified: 2019-03-06 00:41 UTC
CC List: 9 users

Fixed In Version: fence-virt-0.3.2-9.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 19:26:49 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2017:2089
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: fence-virt bug fix and enhancement update
Last Updated: 2017-08-01 18:14:49 UTC

Description Andrew Beekhof 2016-05-09 05:05:52 UTC
Description of problem:

As per the summary: the daemon cannot be run as a non-root user, and the tcp listener is disabled.

Needed for testing OpenStack in a pure-virt environment.

Two upstream patches:
   https://github.com/ClusterLabs/fence-virt/commit/988c084
   https://github.com/ClusterLabs/fence-virt/commit/87b4eb3

Plus:

[03:05 PM] beekhof@fedora ~/Development/sources/fence-virt/rhel ☺ # git diff
diff --git a/fence-virt.spec b/fence-virt.spec
index 7baedc3..97173a6 100644
--- a/fence-virt.spec
+++ b/fence-virt.spec
@@ -52,6 +53,15 @@ Requires:    fence-virtd
 Provides multicast listener capability for fence-virtd.
 
 
+%package -n fence-virtd-tcp
+Summary:       Tcp listener for fence-virtd
+Group:         System Environment/Base
+Requires:      fence-virtd
+
+%description -n fence-virtd-tcp
+Provides tcp listener capability for fence-virtd.
+
+
 %package -n fence-virtd-serial
 Summary:       Serial VMChannel listener for fence-virtd
 Group:         System Environment/Base
@@ -78,10 +88,11 @@ machines on a desktop.
 %setup -q
 
 %patch0 -p1 -b .bz1207422
+%patch1 -p1
 
 %build
 ./autogen.sh
-%{configure} --disable-libvirt-qmf-plugin
+%{configure} --disable-libvirt-qmf-plugin --enable-tcp-plugin
 make %{?_smp_mflags}
 
 
@@ -156,6 +167,10 @@ fi
 %defattr(-,root,root,-)
 %{_libdir}/%{name}/serial.so
 
+%files -n fence-virtd-tcp
+%defattr(-,root,root,-)
+%{_libdir}/%{name}/tcp.so
+
 %files -n fence-virtd-libvirt
 %defattr(-,root,root,-)
 %{_libdir}/%{name}/libvirt.so
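
A quick sanity check of the packaging change (a sketch only, assuming a local rebuild of this spec; the exact NVR and arch in the path will differ):

# Not part of the original report: confirm the tcp listener plugin lands in
# the new fence-virtd-tcp subpackage after rebuilding the spec.
rpmbuild -bb fence-virt.spec
rpm -qlp ~/rpmbuild/RPMS/*/fence-virtd-tcp-*.rpm | grep tcp.so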

Comment 2 Roman Bednář 2016-05-20 15:04:43 UTC
Marking with a conditional NAK. It's not clear whether the tcp listener feature is new.
The git diff does not tell us much, and the man pages do not mention such a feature.

In order to provide an ACK, we will need the following:

1) Unit test results OR usage and config example
    - needed to plan correct testing

2) Documentation
    - if the tcp listener capability is new, we need it well documented for end users

3) Feature specification
    - ideally a use case for the feature

Comment 3 Andrew Beekhof 2016-05-23 01:37:42 UTC
(In reply to Roman Bednář from comment #2)
> Marking with a conditional NAK. It's not clear whether the tcp listener
> feature is new.
> The git diff does not tell us much, and the man pages do not mention such a
> feature.

tcp is documented in fence_virt.conf

Comment 4 Chris Feist 2016-06-08 20:40:24 UTC
Roman,

Can you take a look at the documentation for the tcp listener? It's in the fence_virt.conf man page in the fence-virtd package, starting on line 118, and it also has some information on how it is configured.
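
For reference, the listener stanza documented there has roughly this shape (a sketch only; the address and key_file values are placeholders to adapt):

listeners {
	tcp {
		# address the daemon listens on and the shared key used by clients
		address = "192.168.23.1";
		key_file = "/etc/cluster/fence_xvm.key";
	}
}

fence_virtd {
	backend = "libvirt";
	listener = "tcp";
}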

Comment 5 Roman Bednář 2016-06-09 13:47:01 UTC
Ok, adding QA ack.

Comment 11 Jaroslav Kortus 2016-09-20 12:18:20 UTC
Ryan, do we have any update on this bug?

Comment 14 Andrew Beekhof 2016-10-06 03:41:07 UTC
test procedure:

wget http://download.eng.bos.redhat.com/brewroot/packages/fence-virt/0.3.2/5.el7/i686/fence-virt-{,debuginfo-}0.3.2-5.el7.i686.rpm

wget http://download.eng.bos.redhat.com/brewroot/packages/fence-virt/0.3.2/5.el7/i686/fence-virtd-{,libvirt-,multicast-,tcp-}0.3.2-5.el7.i686.rpm

yum install -y fence-*.rpm

mkdir /etc/cluster/

echo redhat > /etc/cluster/fence_xvm.key

chmod a+r /etc/cluster/fence_xvm.key

chmod a+rx /etc/cluster/

sed -i -e s/multicast/tcp/ -e s/225.0.0.12/192.168.23.1/ /etc/fence_virt.conf 

sed -i 's@$FENCE_VIRTD_ARGS@$FENCE_VIRTD_ARGS -p /tmp/fence_virtd_stack.pid@' /usr/lib/systemd/system/fence_virtd.service

# If virtual machines are run as a different user, e.g. on a tripleo-quickstart deploy of OSP9
# sed -i -e s/system/session/ /etc/fence_virt.conf 
# echo "User=stack" >> /usr/lib/systemd/system/fence_virtd.service

systemctl enable fence_virtd.service

service fence_virtd start

fence-virt -T 192.168.23.1 -o list
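
Once the listing works, a reasonable follow-up is to query and then power-cycle a single guest over the tcp listener (a sketch only; the client binary ships as /usr/sbin/fence_virt, -H and -o are its standard domain/action options, and ceph_0 is just an example domain name):

# Hypothetical extra checks, not part of the procedure above:
fence_virt -T 192.168.23.1 -H ceph_0 -o status
fence_virt -T 192.168.23.1 -H ceph_0 -o reboot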

Comment 15 Andrew Beekhof 2016-10-06 05:49:18 UTC
Example output:

ceph_0                           76fc81e6-4fe9-4486-b03a-e8ae401c3421 on
ceph_1                           a80d43b4-e1db-4801-ba47-b6c0a35c1706 on
ceph_2                           69dbe3f2-03d6-4933-8471-b429f7d76ad1 on
compute_0                        1762c0ef-46ac-4f84-90a9-52f2af2b7dec on
control_0                        725820be-33ee-4630-9bfc-3f4124291df5 on
control_1                        92baf1f6-03be-4786-a9fa-142d3198f599 on
control_2                        af0f9f3c-a573-40d2-93b3-5ef2327274cd on
undercloud                       5c71b883-9761-4e78-bd4f-c5e8e4df1a71 on


For some reason it only worked if I ran the following as stack instead of using systemd:

   /usr/sbin/fence_virtd -w -p /tmp/fence_virtd_stack.pid -F -d99

But that's not really relevant to this bug.


Full log from the server side:

[stack@haa-08 ~]$ /usr/sbin/fence_virtd -w -p /tmp/fence_virtd_stack.pid -F -d99
Using /tmp/fence_virtd_stack.pid
Background mode disabled
Debugging threshold is now 99
backends {
	libvirt {
		uri = "qemu:///session";
	}

}

listeners {
	tcp {
		interface = "virbr0";
		address = "192.168.23.1";
		key_file = "/etc/cluster/fence_xvm.key";
	}

}

fence_virtd {
	debug = "99";
	backend = "libvirt";
	listener = "tcp";
}

Backend plugin: libvirt
Listener plugin: tcp
Searching /usr/lib/fence-virt for plugins...
Searching for plugins in /usr/lib/fence-virt
Loading plugin from /usr/lib/fence-virt/tcp.so
Failed to map backend_plugin_version
Registered listener plugin tcp 0.1
Loading plugin from /usr/lib/fence-virt/libvirt.so
Registered backend plugin libvirt 0.1
Loading plugin from /usr/lib/fence-virt/multicast.so
Failed to map backend_plugin_version
Registered listener plugin multicast 1.2
3 plugins found
Available backends:
    libvirt 0.1
Available listeners:
    tcp 0.1
    multicast 1.2
Debugging threshold is now 99
Using qemu:///session
Debugging threshold is now 99
Got /etc/cluster/fence_xvm.key for key_file
Got 192.168.23.1 for address
Reading in key file /etc/cluster/fence_xvm.key into 0x821602c (4096 max size)
Stopped reading @ 7 bytes
Actual key length = 7 bytes
ipv4_listen: Setting up ipv4 listen socket
ipv4_listen: Success; fd = 8
Accepted client...
Request 5 seqno 354639 domain 
Plain TCP request
Request 5 seqno 354639 target 
libvirt_devstatus ---
Sending response to caller...
Accepted client...
Request 5 seqno 397958 domain 
Plain TCP request
Request 5 seqno 397958 target 
libvirt_devstatus ---
Sending response to caller...
Accepted client...
Request 6 seqno 780593 domain 
Plain TCP request
Request 6 seqno 780593 target 
libvirt_hostlist
Sending 76fc81e6-4fe9-4486-b03a-e8ae401c3421
Sending a80d43b4-e1db-4801-ba47-b6c0a35c1706
Sending 69dbe3f2-03d6-4933-8471-b429f7d76ad1
Sending 1762c0ef-46ac-4f84-90a9-52f2af2b7dec
Sending 725820be-33ee-4630-9bfc-3f4124291df5
Sending 92baf1f6-03be-4786-a9fa-142d3198f599
Sending af0f9f3c-a573-40d2-93b3-5ef2327274cd
Sending 5c71b883-9761-4e78-bd4f-c5e8e4df1a71
Sending terminator packet
Sending response to caller...

Comment 16 Andrew Beekhof 2016-10-06 05:58:36 UTC
Ah, that's why:

type=AVC msg=audit(1475718963.199:5271506): avc:  denied  { search } for  pid=76426 comm="fence_virtd" name="stack" dev="dm-2" ino=1073750528 scontext=system_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:user_home_dir_t:s0 tclass=dir
type=SYSCALL msg=audit(1475718963.199:5271506): arch=40000003 syscall=195 success=no exit=-13 a0=81a9128 a1=ffb91ecc a2=f7597000 a3=81a9128 items=0 ppid=1 pid=76426 auid=4294967295 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=(none) ses=4294967295 comm="fence_virtd" exe="/usr/sbin/fence_virtd" subj=system_u:system_r:fenced_t:s0 key=(null)
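
i.e. SELinux denies fence_virtd (fenced_t) search access to the stack user's home directory, which is why the qemu:///session setup only works when run directly rather than via systemd. A local way to confirm or work around it would be something like the following (a sketch only, not part of the shipped fix):

# Hypothetical local diagnosis, not something done in this bug:
# show the denials for fence_virtd and build a one-off local policy module.
ausearch -m avc -c fence_virtd
ausearch -m avc -c fence_virtd | audit2allow -M local_fence_virtd
semodule -i local_fence_virtd.pp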

Comment 18 Andrew Beekhof 2017-03-14 01:33:47 UTC
That is my understanding too

Comment 23 errata-xmlrpc 2017-08-01 19:26:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2089

