Bug 1241095 - [SELinux]: CTDB node goes to DISCONNECTED/BANNED state when multiple nodes are rebooted (RHEL-7)
Summary: [SELinux]: CTDB node goes to DISCONNECTED/BANNED state when multiple nodes are rebooted (RHEL-7)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.1
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On: 1224879 1236980
Blocks: 1202842 1212796 1248655
 
Reported: 2015-07-08 12:32 UTC by Prasanth
Modified: 2015-11-19 10:39 UTC
CC: 17 users

Fixed In Version: selinux-policy-3.13.1-33.el7
Doc Type: Bug Fix
Doc Text:
After multiple CTDB cluster nodes were rebooted one after another while I/O from a Windows client was running, the status of the cluster was incorrectly displayed as UNHEALTHY and the status of the nodes as BANNED or DISCONNECTED. With this update, the related SELinux policy no longer prevents signal transmission between the CTDB cluster and certain Samba processes. As a result, the status of the cluster and the nodes displays properly in the above situation.
Clone Of: 1236980
Cloned To: 1248655
Environment:
Last Closed: 2015-11-19 10:39:11 UTC
Target Upstream Version:
Embargoed:




Links
System ID                               Private  Priority  Status        Summary                        Last Updated
Red Hat Product Errata RHBA-2015:2300   0        normal    SHIPPED_LIVE  selinux-policy bug fix update  2015-11-19 09:55:26 UTC

Description Prasanth 2015-07-08 12:32:04 UTC
+++ This bug was initially created as a clone of Bug #1236980 +++

Description of problem:

The CTDB cluster does not come back to a healthy state when multiple nodes are rebooted one after the other while I/O is running from a Windows client.

1st time:
**************
In a 4-node CTDB cluster, when two nodes were rebooted one after the other, the rebooted nodes came back but remained in UNHEALTHY state, and the two other nodes went to BANNING state.

2nd time:
************
In a 4-node CTDB cluster, when two nodes were rebooted one after the other, the rebooted nodes came back but remained in UNHEALTHY state, and the two other nodes went to DISCONNECTED state.

This happens even without running I/O.

Version-Release number of selected component (if applicable):
ctdb2.5-2.5.5-2.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a CTDB setup.
2. Mount the volume using the VIP.
3. Start I/O from a Windows client.
4. Reboot node 1 and check ctdb status.
5. Reboot node 3 and check ctdb status.
6. Wait for both nodes to come up and check ctdb status.
7. ctdb status shows the nodes in UNHEALTHY/DISCONNECTED state (see the example output below).
8. In one scenario a node goes to BANNED state.
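For reference, the failure mode looks roughly like this in ctdb status (a sketch; node addresses, generation number, and exact flag combinations are illustrative):

# ctdb status
Number of nodes:4
pnn:0 10.0.0.1       OK (THIS NODE)
pnn:1 10.0.0.2       UNHEALTHY
pnn:2 10.0.0.3       DISCONNECTED|UNHEALTHY|INACTIVE
pnn:3 10.0.0.4       BANNED|INACTIVE
Generation:1234567890
Recovery mode:NORMAL (0)
Recovery master:0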

Actual results:
The CTDB cluster is UNHEALTHY.
Nodes go to BANNED/DISCONNECTED state.

Expected results:

Once all the nodes come up, the cluster should be up and all nodes should be in OK state.

Additional info:

When the test was run in SELinux enforcing mode, there were AVC denials related to ctdb and iptables:
type=AVC msg=audit(06/30/2015 01:25:33.897:367) : avc:  denied  { read } for  pid=4431 comm=iptables path=/var/lib/ctdb/iptables-ctdb.flock dev="dm-0" ino=67681652 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:ctdbd_var_lib_t:s0 tclass=file 
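Denials like this can be pulled from the audit log and explained with ausearch and audit2why (a sketch; the time window is illustrative):

# ausearch -m avc -c iptables -ts recent
...prints the denial shown above...
# ausearch -m avc -c iptables -ts recent | audit2why
...reports whether a boolean covers the denial or a policy change is needed...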

Switched SELinux to permissive mode; the cluster still did not come to a healthy state.
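For reference, the mode switch is the usual runtime toggle (a sketch; setenforce does not persist across reboots):

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive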

Will provide the sosreports.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-06-30 02:21:24 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from surabhi on 2015-06-30 02:42:19 EDT ---

sosreports from all the nodes are available at
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1236980/

Tried to reproduce the issue on RHEL 6.7 with a two-node CTDB cluster.
The issue is seen there as well. When one node is rebooted, the cluster doesn't come back to OK state and one of the nodes remains in DISCONNECTED/UNHEALTHY state.

--- Additional comment from RHEL Product and Program Management on 2015-07-03 02:23:04 EDT ---

This bug report has Keywords: Regression or TestBlocker.

Since no regressions or test blockers are allowed between releases,
it is also being identified as a blocker for this release.

Please resolve ASAP.

--- Additional comment from surabhi on 2015-07-03 04:53:22 EDT ---

Even with the new build, ctdb2.5.5-3, the nodes are not coming to a healthy state after reboot.

Seeing the following AVC denial when a system is rebooted and tries to fail back:
 type=AVC msg=audit(07/03/2015 01:30:25.839:154) : avc:  denied  { block_suspend } for  pid=31332 comm=smbd capability=block_suspend  scontext=system_u:system_r:smbd_t:s0 tcontext=system_u:system_r:smbd_t:s0 tclass=capability2

--- Additional comment from surabhi on 2015-07-03 04:59:19 EDT ---

Worked with smb-dev and the SELinux team to root-cause this; it appears to be an SELinux issue.
The fix has to come in the next build of selinux-policy for RHEL 7.1.
The SELinux bz for RHEL 7.1 is https://bugzilla.redhat.com/show_bug.cgi?id=1224879

--- Additional comment from Rejy M Cyriac on 2015-07-04 05:49:45 EDT ---

Accepted as a Blocker for RHGS 3.1 at the RHGS 3.1 Blocker BZ Status Check meeting on 03 July 2015.

pm_ack is being provided as per the decision at the meeting.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-04 05:50:22 EDT ---

Since this bug has been approved for the Red Hat Gluster Storage 3.1.0 release, through release flag 'rhgs-3.1.0+', the Target Release is being automatically set to 'RHGS 3.1.0'

--- Additional comment from surabhi on 2015-07-05 14:06:06 EDT ---


type=AVC msg=audit(06/25/2015 06:19:22.207:22288) : avc:  denied  { signull } for  pid=15386 comm=ctdbd scontext=system_u:system_r:ctdbd_t:s0 tcontext=system_u:system_r:smbd_t:s0 tclass=process 
----

type=AVC msg=audit(06/25/2015 06:19:32.566:22290) : avc:  denied  { read } for  pid=16754 comm=iptables path=/var/lib/ctdb/iptables-ctdb.flock dev="dm-0" ino=67681652 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:ctdbd_var_lib_t:s0 tclass=file

type=AVC msg=audit(07/03/2015 01:30:25.839:154) : avc:  denied  { block_suspend } for  pid=31332 comm=smbd capability=block_suspend  scontext=system_u:system_r:smbd_t:s0 tcontext=system_u:system_r:smbd_t:s0 tclass=capability2 

type=AVC msg=audit(1435939596.446:240): avc:  denied  { signull } for  pid=1097 comm="ctdbd" scontext=system_u:system_r:ctdbd_t:s0 tcontext=unconfined_u:unconfined_r:samba_unconfined_net_t:s0-s0:c0.c1023 tclass=process

**************************************************************

All of the above AVC denials are fixed by the temporary module provided in BZ 1224879.

***************************************************************
Only the following AVC denial is still seen:


type=AVC msg=audit(07/05/2015 13:01:27.621:709) : avc:  denied  { signull } for  pid=29125 comm=ctdbd scontext=system_u:system_r:ctdbd_t:s0 tcontext=system_u:system_r:winbind_t:s0 tclass=process
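A denial like this maps directly onto an allow rule, and audit2allow can generate it straight from the audit log (a sketch; output formatting varies slightly between versions):

# ausearch -m avc -c ctdbd -ts today | audit2allow
#============= ctdbd_t ==============
allow ctdbd_t winbind_t:process signull;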

--- Additional comment from Milos Malik on 2015-07-07 07:52:55 EDT ---

The non-beaker-task form of the local policy follows:

# cat bz1236980.te
policy_module(bz1236980,1.0)

require {
  type smbd_t;
  type ctdbd_t;
  type winbind_t;
  type samba_unconfined_net_t;
  class capability2 { block_suspend };
  class process { signull };
}

allow smbd_t smbd_t : capability2 { block_suspend };
allow ctdbd_t samba_unconfined_net_t : process { signull };
allow ctdbd_t winbind_t : process { signull };

# make -f /usr/share/selinux/devel/Makefile 
Compiling targeted bz1236980 module
/usr/bin/checkmodule:  loading policy configuration from tmp/bz1236980.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 17) to tmp/bz1236980.mod
Creating targeted bz1236980.pp policy package
rm tmp/bz1236980.mod tmp/bz1236980.mod.fc
# semodule -i bz1236980.pp 
#
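To confirm the module is loaded, and to remove it once a fixed selinux-policy package ships, the usual commands are (a sketch; newer releases print only the module name without a version):

# semodule -l | grep bz1236980
bz1236980	1.0
# semodule -r bz1236980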

--- Additional comment from Milos Malik on 2015-07-07 08:14:48 EDT ---

Here is a beaker task which provides the same local policy as comment #9. You can prepend it to the list of your beaker tasks:

--task "! yum -y install selinux-policy-devel policycoreutils-devel ; echo -en 'policy_module(bz1236980,1.0)\n\nrequire {\n  type smbd_t;\n  type ctdbd_t;\n  type winbind_t;\n  type samba_unconfined_net_t;\n  class capability2 { block_suspend };\n  class process { signull };\n}\n\nallow smbd_t smbd_t : capability2 { block_suspend };\nallow ctdbd_t samba_unconfined_net_t : process { signull };\nallow ctdbd_t winbind_t : process { signull };\n\n' > bz1236980.te ; make -f /usr/share/selinux/devel/Makefile ; semodule -i bz1236980.pp ; semodule -l | grep bz1236980"

Comment 2 Lukas Vrabec 2015-07-16 15:15:54 UTC
What's the state of this bug? Any AVCs?

Comment 8 errata-xmlrpc 2015-11-19 10:39:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2300.html
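Once the fixed package (selinux-policy-3.13.1-33.el7) is installed, the new rules can be spot-checked with sesearch from setools (a sketch; exact output formatting varies by setools version):

# sesearch --allow -s ctdbd_t -t winbind_t -c process -p signull
Found 1 semantic av rules:
   allow ctdbd_t winbind_t : process signull ;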

