Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1597888

Summary: After RHVH upgrade, SELinux denials leave the host in a non-responsive state.
Product: Red Hat Enterprise Virtualization Manager
Component: rhev-hypervisor-ng
Version: 4.2.1
Hardware: All
OS: Linux
Priority: medium
Severity: high
Status: CLOSED INSUFFICIENT_DATA
Reporter: Pawan kumar Vilayatkar <pvilayat>
Assignee: Yuval Turgeman <yturgema>
QA Contact: Yaning Wang <yaniwang>
Docs Contact:
CC: cshao, dfediuck, huzhao, lsurette, michal.skrivanek, mkalinin, pstehlik, pvilayat, qiyuan, rbarry, sirao, srevivo, weiwang, yaniwang, ycui, yturgema, yzhao
Target Milestone: ---
Target Release: ---
Flags: lsvaty: testing_plan_complete-
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-07 19:19:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Pawan kumar Vilayatkar 2018-07-03 20:04:06 UTC
Description of problem:

After successfully upgrading RHVH from redhat-virtualization-host-image-update-placeholder-4.1-11.0.el7.noarch to redhat-virtualization-host-image-update-4.2-20180622.0.el7_5.noarch, SELinux denies access to some files, leaving the host in a non-responsive state.


Version-Release number of selected component (if applicable):
redhat-virtualization-host-image-update-placeholder-4.1-11.0.el7.noarch


Looking at the messages on the host after the post-upgrade reboot, we can see the following audit record:
"type=1404 audit(1530123409.199:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295"
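The record above is the kernel's SELinux status message: enforcing=1 with old_enforcing=0 means SELinux switched from permissive to enforcing mode at boot, after which any mislabeled file starts triggering denials. As a sketch (field meanings per the kernel audit record format), its fields can be decoded like this:

```shell
# audit(1530123409.199:2) = epoch-seconds.milliseconds:serial-number
date -u -d @1530123409 +'%Y-%m-%d %H:%M:%S UTC'   # prints: 2018-06-27 18:16:49 UTC
# auid=4294967295 and ses=4294967295 are (uint32)-1, i.e. "unset":
# no login UID or session existed yet, so this fired during early boot.
```

The event time (2018-06-27) matches the upgrade window reported in this bug.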

Comment 2 Pawan kumar Vilayatkar 2018-07-03 20:07:48 UTC
Hello,

Sorry, I forgot to mention: running "restorecon /etc/group" and rebooting corrects the issue, and the hypervisor becomes available again.

Comment 3 Ryan Barry 2018-07-04 10:19:51 UTC
Please mount the LV for the old layer and check the context on group

This looks like a mislabeled file instead of a bad policy. RHVH copies contexts between layers

mkdir /tmp/mnt
mount /dev/rhvh/*4.1*+1 /tmp/mnt
ls -lZ /tmp/mnt/etc/group

Comment 4 Pawan kumar Vilayatkar 2018-07-04 17:27:13 UTC
(In reply to Ryan Barry from comment #3)
> Please mount the LV for the old layer and check the context on group
> 
> This looks like a mislabeled file instead of a bad policy. RHVH copies
> contexts between layers
> 
> mkdir /tmp/mnt
> mount /dev/rhvh/*4.1*+1 /tmp/mnt
> ls -lZ /tmp/mnt/etc/group


It seems the old layer does not exist; below is the output:

# mkdir /tmp/mnt
# mount /dev/rhvh/*4.1*+1 /tmp/mnt
mount: special device /dev/rhvh/*4.1*+1 does not exist
# ll /dev/rhvh/
total 0
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 home -> ../dm-22
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 rhvh-4.2.4.3-0.20180622.0+1 -> ../dm-6
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 swap -> ../dm-8
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 tmp -> ../dm-21
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var -> ../dm-20
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_crash -> ../dm-25
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log -> ../dm-19
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log_audit -> ../dm-18
# mount /dev/rhvh/*4.2*+1 /tmp/mnt
# ls -lZ /tmp/mnt/etc/group 
-rw-r--r--. root root unconfined_u:object_r:passwd_file_t:s0 /tmp/mnt/etc/group
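A side note on the "mount: special device /dev/rhvh/*4.1*+1 does not exist" error above: the `ll /dev/rhvh/` listing confirms only the 4.2 layer LVs remain, and when a glob matches nothing, bash (with its default settings, nullglob unset) passes the pattern through unexpanded, so mount received the literal string "/dev/rhvh/*4.1*+1". A minimal demonstration, run in a fresh empty directory rather than /dev:

```shell
# In a directory with no matching file, the glob is not expanded:
tmp=$(mktemp -d)
cd "$tmp"
echo *4.1*+1    # no match -> prints the literal pattern: *4.1*+1
```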

Comment 9 Ryan Barry 2018-07-17 09:04:11 UTC
I can't reproduce this, and the context on the original file is ok. We'll push a patch to double-check this.

Comment 22 Red Hat Bugzilla 2023-09-15 00:10:28 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days