Bug 1719083 - VM with "nvdimm" memory will not start successfully when SELinux is enabled
Summary: VM with "nvdimm" memory will not start successfully when SELinux is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: selinux-policy
Version: ---
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.1
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-11 02:57 UTC by jiyan
Modified: 2020-11-14 12:33 UTC
CC List: 13 users

Fixed In Version: selinux-policy-3.14.3-12.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 22:11:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links
Red Hat Product Errata RHBA-2019:3547 (last updated 2019-11-05 22:12:01 UTC)

Description jiyan 2019-06-11 02:57:46 UTC
Description of problem:
VM with "nvdimm" memory will not start successfully when SELinux is enabled


Version-Release number of selected component (if applicable):
kernel-4.18.0-80.el8.x86_64
libvirt-5.0.0-9.module+el8.0.1+3240+dc659f51.x86_64
qemu-kvm-3.1.0-27.module+el8.0.1+3253+c5371cb3.x86_64


How reproducible:
100%


Steps to Reproduce:
1. Prepare a shut-off VM with the following configuration:
# virsh domstate vmq35_801
shut off

# virsh dumpxml vmq35_801
...
  <cpu mode='custom' match='exact' check='partial'>
    ...
    <numa>
      <cell id='0' cpus='0-3' memory='512000' unit='KiB' discard='yes'/>
    </numa>
  </cpu>
...
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>

# truncate -s 512M /tmp/nvdimm

2. Check the SELinux mode and the context of /tmp/nvdimm, then start the VM
# getenforce 
Enforcing

# ll -alZ /tmp/nvdimm 
-rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 536870912 Jun 10 22:39 /tmp/nvdimm

# virsh start vmq35_801
error: Failed to start domain vmq35_801
error: internal error: process exited while connecting to monitor: 2019-06-11T02:50:09.651486Z qemu-kvm: -object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm,share=yes,size=536870912,host-nodes=0,policy=bind: unable to map backing store for guest RAM: Permission denied

# cat /var/log/audit/audit.log  |grep nvdimm
type=AVC msg=audit(1560221579.207:2556): avc:  denied  { map } for  pid=22727 comm="qemu-kvm" path="/tmp/nvdimm" dev="0:46" ino=61164750 scontext=system_u:system_r:svirt_t:s0:c86,c380 tcontext=system_u:object_r:nfs_t:s0 tclass=file permissive=0
type=SYSCALL msg=audit(1560221579.207:2556): arch=c000003e syscall=9 success=no exit=-13 a0=7f5d549ff000 a1=20000000 a2=3 a3=11 items=0 ppid=1 pid=22727 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c86,c380 key=(null)ARCH=x86_64 SYSCALL=mmap AUID="unset" UID="qemu" GID="qemu" EUID="qemu" SUID="qemu" FSUID="qemu" EGID="qemu" SGID="qemu" FSGID="qemu"

3. Set SELinux to permissive mode and then start the VM
# setenforce 0

# getenforce 
Permissive

# ll -alZ /tmp/nvdimm 
-rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 536870912 Jun 10 22:39 /tmp/nvdimm

# virsh start vmq35_801
Domain vmq35_801 started

# ll -alZ /tmp/nvdimm 
-rw-r--r--. 1 qemu qemu system_u:object_r:nfs_t:s0 536870912 Jun 10 22:51 /tmp/nvdimm

# cat /var/log/audit/audit.log  |grep nvdimm
type=AVC msg=audit(1560221658.585:2588): avc:  denied  { map } for  pid=22815 comm="qemu-kvm" path="/tmp/nvdimm" dev="0:46" ino=61164750 scontext=system_u:system_r:svirt_t:s0:c280,c457 tcontext=system_u:object_r:nfs_t:s0 tclass=file permissive=1
type=SYSCALL msg=audit(1560221658.585:2588): arch=c000003e syscall=9 success=yes exit=139743207616512 a0=7f18803ff000 a1=20000000 a2=3 a3=11 items=0 ppid=1 pid=22815 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c280,c457 key=(null)ARCH=x86_64 SYSCALL=mmap AUID="unset" UID="qemu" GID="qemu" EUID="qemu" SUID="qemu" FSUID="qemu" EGID="qemu" SGID="qemu" FSGID="qemu"
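
For reference, AVC denials like the ones above can be collected and turned into a candidate local policy module with the standard audit tools. A minimal sketch (the module name "local_nvdimm" is arbitrary and not part of this report; given the denial above, the generated .te file is expected to contain roughly "allow svirt_t nfs_t:file map;"):

# ausearch -m avc -ts recent | grep nvdimm
# ausearch -m avc -ts recent | audit2allow -M local_nvdimm
# cat local_nvdimm.te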

Actual results:
As shown in steps 2 and 3: with SELinux enforcing, the VM fails to start with an mmap "Permission denied" AVC denial; with SELinux permissive, it starts.

Expected results:
The VM should start successfully with SELinux enabled (Enforcing).

Additional info:
This issue cannot be reproduced on RHEL 7.7:

# getenforce 
Enforcing

# virsh domstate vmq35_772
shut off

# virsh dumpxml vmq35_772 |grep "<memory" -A10
--
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='KiB'>523264</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>

# ll /tmp/nvdimm -alZ
-rw-r--r--. root root system_u:object_r:nfs_t:s0       /tmp/nvdimm

# virsh start vmq35_772
Domain vmq35_772 started

# ll /tmp/nvdimm -alZ
-rw-r--r--. qemu qemu system_u:object_r:nfs_t:s0       /tmp/nvdimm

Comment 1 Han Han 2019-06-11 04:30:30 UTC
The SELinux label type of /tmp/nvdimm is nfs_t, which is very strange. Is your /tmp mounted over NFS?
Could you please run `setsebool virt_use_nfs 1` and then try again? This SELinux boolean allows virt to use NFS files.
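
A minimal check-and-set sequence for that suggestion might look like this (the findmnt check is an illustrative addition, not taken from this report; -P makes the boolean change persistent across reboots):

# findmnt /tmp
# getsebool virt_use_nfs
# setsebool -P virt_use_nfs 1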

Comment 2 jiyan 2019-06-11 04:57:54 UTC
Hi Han Han, the virt_use_nfs boolean is already set to on.

# getenforce 
Enforcing

# getsebool -a |grep virt_use_nfs
virt_use_nfs --> on

# virsh start vmq35_801
error: Failed to start domain vmq35_801
error: internal error: process exited while connecting to monitor: 2019-06-11T04:54:42.602774Z qemu-kvm: -object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm,share=yes,size=536870912,host-nodes=0,policy=bind: unable to map backing store for guest RAM: Permission denied


I had forgotten that NFS is mounted on /tmp. After moving the nvdimm backing file to /mnt, the VM starts successfully.

# getenforce 
Enforcing

# ll -alZ /mnt/nvdimm 
-rw-r--r--. 1 root root system_u:object_r:default_t:s0 536870912 Jun 11 00:52 /mnt/nvdimm

# virsh dumpxml vmq35_801 |grep mnt
        <path>/mnt/nvdimm</path>

# virsh start vmq35_801
Domain vmq35_801 started

# ll -alZ /mnt/nvdimm 
-rw-r--r--. 1 qemu qemu system_u:object_r:svirt_image_t:s0:c346,c509 536870912 Jun 11 00:55 /mnt/nvdimm
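
For completeness, the workaround above roughly amounts to recreating the backing file under /mnt and editing the domain XML so that <path> points at /mnt/nvdimm instead of /tmp/nvdimm (a sketch using the names from this report):

# truncate -s 512M /mnt/nvdimm
# virsh edit vmq35_801
# virsh start vmq35_801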

Comment 3 jiyan 2019-06-11 04:58:59 UTC
By the way, regarding the description in this bug: the RHEL 7 test was also run with the backing file on /tmp, which is likewise an NFS mount, and virt_use_nfs was also set to on.

Comment 23 errata-xmlrpc 2019-11-05 22:11:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3547
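
On an affected RHEL 8 host, picking up the fix would typically mean updating the policy package and checking the installed version against the "Fixed In Version" field above (a sketch; expect selinux-policy-3.14.3-12.el8 or later):

# yum update selinux-policy
# rpm -q selinux-policy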

