Bug 1597888 - After RHVH upgrade, SELinux denials leave the host in a non-responsive state. [NEEDINFO]
Summary: After RHVH upgrade, SELinux denials leave the host in a non-responsive...
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: rhev-hypervisor-ng
Version: 4.2.1
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Yuval Turgeman
QA Contact: Yaning Wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-03 20:04 UTC by Pawan kumar Vilayatkar
Modified: 2021-09-09 14:55 UTC
CC: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-07 19:19:26 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
sbonazzo: needinfo? (pvilayat)
mkalinin: needinfo? (pvilayat)
lsvaty: testing_plan_complete-




Links
System ID: Red Hat Issue Tracker RHV-43546
Last Updated: 2021-09-09 14:55:17 UTC

Description Pawan kumar Vilayatkar 2018-07-03 20:04:06 UTC
Description of problem:

After successfully upgrading RHVH from redhat-virtualization-host-image-update-placeholder-4.1-11.0.el7.noarch to redhat-virtualization-host-image-update-4.2-20180622.0.el7_5.noarch, SELinux denies access to some files (notably /etc/group), leaving the host in a non-responsive state.


Version-Release number of selected component (if applicable):
redhat-virtualization-host-image-update-placeholder-4.1-11.0.el7.noarch


Looking at the messages on the host after the post-upgrade reboot, we can see the following audit record:
"type=1404 audit(1530123409.199:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295"
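For context, a type=1404 record is an SELinux status change, not a denial: it says the kernel switched from permissive (old_enforcing=0) to enforcing (enforcing=1) at boot, which is why a label left wrong by the upgrade would suddenly start being denied. As a sketch, the two flags can be pulled out of such a record with standard text tools (the record below is the one quoted above):

```shell
# Parse the SELinux status-change record quoted above.
msg='type=1404 audit(1530123409.199:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295'
# sed extracts the new and old enforcing flags from the record text.
new=$(printf '%s' "$msg" | sed -n 's/.*enforcing=\([01]\) old_enforcing.*/\1/p')
old=$(printf '%s' "$msg" | sed -n 's/.*old_enforcing=\([01]\).*/\1/p')
echo "SELinux went from enforcing=$old to enforcing=$new"
# → SELinux went from enforcing=0 to enforcing=1
```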

Comment 2 Pawan kumar Vilayatkar 2018-07-03 20:07:48 UTC
Hello,

Sorry, I forgot to mention: running "restorecon /etc/group" and rebooting corrects the issue, and the hypervisor becomes available again.
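The workaround above can be generalized into a check-then-fix step: compare the type component of the label currently on the file with the type the loaded policy expects, and relabel only on a mismatch. A minimal sketch, assuming the policycoreutils tools (matchpathcon, restorecon) are installed on the host; this is not the fix the maintainers shipped, just an illustration of the manual workaround:

```shell
# Skip gracefully on hosts without the SELinux userland tools.
command -v matchpathcon >/dev/null 2>&1 || { echo "matchpathcon not installed"; exit 0; }

file=/etc/group
# Type currently on disk, e.g. passwd_file_t (3rd field of user:role:type:level).
current=$(stat -c '%C' "$file" | awk -F: '{print $3}')
# Type the loaded policy expects for this path.
expected=$(matchpathcon -n "$file" | awk -F: '{print $3}')
if [ "$current" != "$expected" ]; then
    echo "label mismatch on $file ($current vs $expected), relabeling"
    restorecon -v "$file"   # reset the file to the policy default
fi
```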

Comment 3 Ryan Barry 2018-07-04 10:19:51 UTC
Please mount the LV for the old layer and check the context on group

This looks like a mislabeled file instead of a bad policy. RHVH copies contexts between layers

mkdir /tmp/mnt
mount /dev/rhvh/*4.1*+1 /tmp/mnt
ls -lZ /tmp/mnt/etc/group

Comment 4 Pawan kumar Vilayatkar 2018-07-04 17:27:13 UTC
(In reply to Ryan Barry from comment #3)
> Please mount the LV for the old layer and check the context on group
> 
> This looks like a mislabeled file instead of a bad policy. RHVH copies
> contexts between layers
> 
> mkdir /tmp/mnt
> mount /dev/rhvh/*4.1*+1 /tmp/mnt
> ls -lZ /tmp/mnt/etc/group


It seems the old layer no longer exists; below is the output:

# mkdir /tmp/mnt
# mount /dev/rhvh/*4.1*+1 /tmp/mnt
mount: special device /dev/rhvh/*4.1*+1 does not exist
# ll /dev/rhvh/
total 0
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 home -> ../dm-22
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 rhvh-4.2.4.3-0.20180622.0+1 -> ../dm-6
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 swap -> ../dm-8
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 tmp -> ../dm-21
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var -> ../dm-20
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_crash -> ../dm-25
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log -> ../dm-19
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log_audit -> ../dm-18
# mount /dev/rhvh/*4.2*+1 /tmp/mnt
# ls -lZ /tmp/mnt/etc/group 
-rw-r--r--. root root unconfined_u:object_r:passwd_file_t:s0 /tmp/mnt/etc/group
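Note that the context in the output above, passwd_file_t, is the type the policy expects for /etc/group, which is consistent with the later finding that the file on the layer is labeled correctly. For scripted checks, the type component can be pulled out of an `ls -lZ` line like the one above; a sketch, with field positions assuming this exact `ls -lZ` layout (permissions, owner, group, context, path):

```shell
# The ls -lZ line from above.
line='-rw-r--r--. root root unconfined_u:object_r:passwd_file_t:s0 /tmp/mnt/etc/group'
# Field 4 is the full context; its third colon-separated piece is the type.
ctx=$(printf '%s' "$line" | awk '{print $4}')
type=$(printf '%s' "$ctx" | awk -F: '{print $3}')
echo "$type"
# → passwd_file_t
```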

Comment 9 Ryan Barry 2018-07-17 09:04:11 UTC
I can't reproduce this, and the context on the original file is ok. We'll push a patch to double-check this.

