Bug 1597888
| Summary: | After RHVH upgrade, SELinux policy denials leave the host in a non-responsive state. | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Pawan kumar Vilayatkar <pvilayat> |
| Component: | rhev-hypervisor-ng | Assignee: | Yuval Turgeman <yturgema> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Yaning Wang <yaniwang> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | ||
| Version: | 4.2.1 | CC: | cshao, dfediuck, huzhao, lsurette, michal.skrivanek, mkalinin, pstehlik, pvilayat, qiyuan, rbarry, sirao, srevivo, weiwang, yaniwang, ycui, yturgema, yzhao |
| Target Milestone: | --- | Flags: | lsvaty: testing_plan_complete- |
| Target Release: | --- | ||
| Hardware: | All | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-09-07 19:19:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Node | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description

Pawan kumar Vilayatkar 2018-07-03 20:04:06 UTC

Hello, sorry, I just forgot to mention that running `restorecon /etc/group` and rebooting corrects the issue and the hypervisor becomes available again.

Ryan Barry (comment #3)

Please mount the LV for the old layer and check the context on group. This looks like a mislabeled file instead of a bad policy; RHVH copies contexts between layers.

```
mkdir /tmp/mnt
mount /dev/rhvh/*4.1*+1 /tmp/mnt
ls -lZ /tmp/mnt/etc/group
```

(In reply to Ryan Barry from comment #3)
> Please mount the LV for the old layer and check the context on group
>
> This looks like a mislabeled file instead of a bad policy. RHVH copies
> contexts between layers
>
> mkdir /tmp/mnt
> mount /dev/rhvh/*4.1*+1 /tmp/mnt
> ls -lZ /tmp/mnt/etc/group

It seems the old layer does not exist; below is the output:

```
# mkdir /tmp/mnt
# mount /dev/rhvh/*4.1*+1 /tmp/mnt
mount: special device /dev/rhvh/*4.1*+1 does not exist
# ll /dev/rhvh/
total 0
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 home -> ../dm-22
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 rhvh-4.2.4.3-0.20180622.0+1 -> ../dm-6
lrwxrwxrwx. 1 root root 7 Jun 28 13:38 swap -> ../dm-8
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 tmp -> ../dm-21
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var -> ../dm-20
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_crash -> ../dm-25
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log -> ../dm-19
lrwxrwxrwx. 1 root root 8 Jun 28 13:38 var_log_audit -> ../dm-18
# mount /dev/rhvh/*4.2*+1 /tmp/mnt
# ls -lZ /tmp/mnt/etc/group
-rw-r--r--. root root unconfined_u:object_r:passwd_file_t:s0 /tmp/mnt/etc/group
```

I can't reproduce this, and the context on the original file is OK. We'll push a patch to double-check this.

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.
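The label check discussed above can be scripted for quick triage. A minimal sketch, with hypothetical helper names (`context_type`, `check_group_context`); it assumes, per the `ls -lZ` output in this bug, that `passwd_file_t` is the expected SELinux type for `/etc/group` on RHVH, and that any other type indicates the mislabel that made the host non-responsive:

```shell
# Extract the type field (third component) from a full SELinux
# context string like user_u:object_r:type_t:s0.
context_type() {
  printf '%s\n' "$1" | awk -F: '{print $3}'
}

# Compare the context of /etc/group against the expected type and
# suggest the fix observed in this bug (restorecon + reboot).
check_group_context() {
  if [ "$(context_type "$1")" = "passwd_file_t" ]; then
    echo "label ok"
  else
    echo "mislabeled: $(context_type "$1"); run 'restorecon /etc/group' and reboot"
  fi
}

check_group_context "unconfined_u:object_r:passwd_file_t:s0"   # prints "label ok"
check_group_context "system_u:object_r:unlabeled_t:s0"
```

On a live host the context string can be obtained with `stat -c %C /etc/group`; `restorecon /etc/group` then resets a wrong label to the policy default, matching the workaround reported above.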