Bug 1413473 - Volume start fails- glusterd complains 'Unable to kill <service>: Permission denied'
Summary: Volume start fails- glusterd complains 'Unable to kill <service>: Permission denied'
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Kaushal
QA Contact: Byreddy
Depends On:
Reported: 2017-01-16 05:22 UTC by Sweta Anandpara
Modified: 2017-10-17 09:59 UTC
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-10-17 09:59:55 UTC
Target Upstream Version:

Attachments (Terms of Use)

Description Sweta Anandpara 2017-01-16 05:22:31 UTC
Description of problem:
Had a 4-node cluster (RHEL6) on a 3.1.3 build, managed by RHGS-C. Had 3 volumes, 2 distribute and 1 replicate. Followed the rolling upgrade process:
* Moved the nodes to maintenance on RHGS-C
* Stopped distribute volumes
* Stopped glusterd on one of the replica and updated
* Started glusterd and waited for heal info (on replica volume) to complete
* Stopped glusterd on the other replica pair and updated
* Started glusterd, and started the distribute volumes

This was when volume start of the distribute volumes failed with the error: 'Unable to commit on one of the peer nodes'.
The volumes ended up in an inconsistent state: one of the nodes showed the volume as started, while the other nodes showed it as stopped.
'Volume start force' also resulted in similar errors. 

Glusterd logs showed:
[2017-01-13 05:50:22.154466] E [MSGID: 106014] [glusterd-proc-mgmt.c:101:glusterd_proc_stop] 0-management: Unable to kill nfs service, reason:Permission denied [Permission denied]

Audit logs showed: 
type=AVC msg=audit(1484286622.152:32400): avc:  denied  { sigkill } for  pid=1962 comm="glusterd" scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=process

Kaushal's analysis below:
type=AVC msg=audit(1484286817.688:34603): avc:  denied  { sigkill }
for  pid=14358 comm="glusterd"
tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=process
There are several occurrences, but this is just one of them. This message corresponds to around 11:23 today (05:53 UTC in the gluster logs; 1484286817 is Unix time).
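For cross-checking, the audit record's epoch timestamp can be converted to UTC to line it up with the glusterd log, which logs in UTC (a quick sketch, assuming GNU date):

```shell
# Convert the epoch from audit(1484286817.688:34603) to UTC so it
# can be matched against the glusterd log timestamps.
audit_epoch=1484286817
utc_time=$(date -u -d "@${audit_epoch}" +"%Y-%m-%d %H:%M:%S")
echo "$utc_time"   # 2017-01-13 05:53:37 UTC, i.e. ~11:23 local (IST)
```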

The AVC denial message reads, 'Pid 14358, glusterd, was denied its attempt to send a SIGKILL to the target, as its context (scontext=unconfined_u:system_r:glusterd_t:s0) is different from the target context (tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023)'.

So, we know why we had EPERM. SELinux did it.
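The mismatch can be pulled out of the AVC record mechanically by extracting the SELinux type (third field of the `user:role:type:level` context) from scontext and tcontext; a sketch against the record quoted above:

```shell
# Sample AVC record from the audit log (whitespace normalised).
avc='type=AVC msg=audit(1484286622.152:32400): avc: denied { sigkill } for pid=1962 comm="glusterd" scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=process'

# Extract the type field of each context.
stype=$(echo "$avc" | sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
ttype=$(echo "$avc" | sed -n 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
echo "source=$stype target=$ttype"   # source=glusterd_t target=virtd_t
```

glusterd_t trying to signal virtd_t is exactly the cross-domain kill that the policy denies.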

What we need to figure out is why the contexts were different in the first place. For this, I have a hypothesis.

Sweta originally created the cluster with RHSC. This means that glusterd was launched by vdsm. And as this is a RHEL6 system, vdsm did a 'service glusterd start', which just runs the init script.

By default, child processes get the SELinux context of the parent. Vdsm's SELinux context is applied to the init script, which applies it to glusterd, which applies it to the other glusterfsd daemons. So the gluster daemons are now running under the vdsm context. (This wouldn't happen on a RHEL7 system; systemd would ensure that services started with `systemctl` get their correct contexts.)
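A quick way to check this on a live node is `ps -eZ | grep -E 'glusterd|glusterfsd'` and comparing the type field of each context. The extraction itself can be sketched against a sample line (the PID and context below are illustrative, not taken from the sosreports):

```shell
# On an affected node one would run:
#   ps -eZ | grep -E 'glusterd|glusterfsd'
# Sample line standing in for real ps -eZ output (LABEL PID TTY TIME CMD):
line='system_u:system_r:virtd_t:s0-s0:c0.c1023 1962 ? 00:00:03 glusterfsd'

ctx=${line%% *}                      # first field: the SELinux context
dtype=$(echo "$ctx" | cut -d: -f3)   # third colon-separated field: the type
echo "$dtype"                        # virtd_t -> daemon inherited the virt context
```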

Even for an RHSC-managed cluster, upgrades are manual. So Sweta followed the manual update process (kill everything, upgrade, restart). This time glusterd was started by running `service glusterd start`, so it got its default context, which is different from the earlier virt context. (The shell runs in an unconfined context, and a process launched from an unconfined process takes its context from the labeling of the application file.)

If, by chance, the glusterfsd daemons weren't killed during the upgrade (Sweta assures me that they were), then the glusterd process running in its default context would try to kill glusterfsd processes still confined in a different context. This is not allowed by SELinux.

If the processes were actually killed, then the whole hypothesis fails. Also, all of my SELinux explanations may have been made up by me on the fly, but I'm pretty sure it works in a similar manner.

Version-Release number of selected component (if applicable):
Faced this issue on the 3.1.3 build (3.7.9-12); a glusterd restart fixed it.
Hit it again after upgrading to a 3.2 interim build (3.8.4-11).

How reproducible:
Have seen it twice

Steps to Reproduce:
1. Create a cluster through RHSC in 3.1.3
2. Upgrade the cluster following all the right steps to 3.2
3. Start pre-existing volume, or try to start a newly created volume.

Additional info:
Sosreports, audit logs will be copied to http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/<bugnumber>/

Comment 2 Atin Mukherjee 2017-01-16 05:29:43 UTC
Assigning it to Kaushal as he has the detailed analysis for this BZ with him.

Comment 3 Sweta Anandpara 2017-01-16 05:38:30 UTC
[qe@rhsqe-repo 1413473]$ hostname
[qe@rhsqe-repo 1413473]$ pwd
[qe@rhsqe-repo 1413473]$ 
[qe@rhsqe-repo 1413473]$ ls -l
total 253292
drwxr-xr-x. 2 qe qe     4096 Jan 16 10:58 audit_logs_dhcp35-100
drwxr-xr-x. 2 qe qe     4096 Jan 16 10:58 audit_logs_dhcp35-101
drwxr-xr-x. 2 qe qe     4096 Jan 16 10:58 audit_logs_dhcp35-104
drwxr-xr-x. 2 qe qe     4096 Jan 16 10:58 audit_logs_dhcp35-115
-rwxr-xr-x. 1 qe qe 68239808 Jan 16 10:55 sosreport-dhcp35-100-sysreg-prod-20170116104839.tar.xz
-rwxr-xr-x. 1 qe qe 68276816 Jan 16 10:56 sosreport-dhcp35-101.lab.eng.blr.redhat.com-20170116104848.tar.xz
-rwxr-xr-x. 1 qe qe 52046492 Jan 16 10:55 sosreport-dhcp35-104.lab.eng.blr.redhat.com-20170116104856.tar.xz
-rwxr-xr-x. 1 qe qe 70780856 Jan 16 10:55 sosreport-dhcp35-115_sysreg-prod-20170116104831.tar.xz
[qe@rhsqe-repo 1413473]$
