Bug 869053 - per-VM DAC labeling needs to impact how root-squashed NFS files are opened
Summary: per-VM DAC labeling needs to impact how root-squashed NFS files are opened
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 822589
Blocks:
 
Reported: 2012-10-22 21:34 UTC by Eric Blake
Modified: 2016-04-26 13:52 UTC
CC List: 9 users

Fixed In Version: libvirt-0.10.2-31.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-14 04:14:25 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2014:1374 (priority: normal, status: SHIPPED_LIVE): libvirt bug fix and enhancement update. Last updated: 2014-10-14 08:11:54 UTC

Description Eric Blake 2012-10-22 21:34:01 UTC
Description of problem:
Right now, several functions in libvirt (such as qemu_driver.c:qemuOpenFile) hard-code an attempt to open with the current uid/gid, then fall back to the driver uid/gid from qemu.conf, so that the fallback works even when root-squash NFS prevents root from opening the file.  But now that you can set <seclabel model='dac'> in the XML to specify which uid/gid the qemu process will run under, we need to open files using the uid/gid that qemu will use for that VM, rather than the driver default.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-4.el6

How reproducible:
100%

Steps to Reproduce:
1. found this by code inspection, but I suspect that testing it will involve setting up a disk image on a root-squash NFS server, as well as turning on a per-VM <seclabel model='dac'> override to something different from the user/group specified in qemu.conf.
  
Actual results:
Blindly using the qemu.conf uid/gid can lead to situations where we fail to open a file that qemu could use, or where we open a file even though qemu cannot use it.

Expected results:
libvirt should honor per-VM DAC settings when opening files for that VM.


Additional info:
See also this upstream thread on what needs to happen:
https://www.redhat.com/archives/libvir-list/2012-October/msg01202.html
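
For reference, the per-VM override in question is expressed in the domain XML roughly like this (the uid:gid value is only illustrative; the verification comments below use the same form):

  <seclabel type='static' model='dac' relabel='yes'>
    <label>qemu:qemu</label>
  </seclabel>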

Comment 3 Wayne Sun 2012-11-06 08:12:59 UTC
pkgs:
# rpm -q libvirt qemu-kvm kernel
libvirt-0.10.2-7.el6.x86_64
qemu-kvm-0.12.1.2-2.316.el6.x86_64
kernel-2.6.32-330.el6.x86_64

steps:
prepare:
1. prepare a domain with its image on root-squashed NFS
# mount -o vers=3 $nfs_server:/export /var/lib/libvirt/images/

# ll /var/lib/libvirt/images/
total 4544356
-rw-r--r--. 1 qemu qemu 4649975808 Jun 15 18:42 qcow2.img

set dynamic_ownership = 0 in qemu.conf
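
That is, make sure /etc/libvirt/qemu.conf contains the line below (user/group stay at their defaults for this preparation step):

dynamic_ownership = 0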

# service libvirtd restart

# virsh start libvirt_test_api
Domain libvirt_test_api started

scenario 1:
current user: root
qemu.conf: default (which will be qemu/qemu)
dac label: dynamic default (which will be qemu:qemu)

1. start domain
# virsh start libvirt_test_api
Domain libvirt_test_api started

scenario 2:
current user: root
qemu.conf: root/root
dac label: dynamic default (which will be qemu:qemu)

1. set user/group as root in qemu.conf
# vim /etc/libvirt/qemu.conf
...
user = "root"
group = "root"

# service libvirtd restart

2. start domain without static dac

# virsh start libvirt_test_api
error: Failed to start domain libvirt_test_api
error: internal error process exited while connecting to monitor: 2012-11-06 07:31:05.300+0000: 7597: debug : virFileClose:72 : Closed fd 21
2012-11-06 07:31:05.300+0000: 7597: debug : virFileClose:72 : Closed fd 28
2012-11-06 07:31:05.301+0000: 7597: debug : virFileClose:72 : Closed fd 3
char device redirected to /dev/pts/4
qemu-kvm: -drive file=/var/lib/libvirt/images/qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2: could not open disk image /var/lib/libvirt/images/qcow2.img: Permission denied


This is expected: the attempt to start as root fails first (root is squashed by NFS), then libvirt retries with the user/group from qemu.conf, which is also root, so that fails as well.


scenario 2 (continued):
current user: root
qemu.conf: root/root
dac label: qemu:qemu

4. add a static DAC seclabel
# virsh edit libvirt_test_api
...
  <seclabel type='static' model='dac' relabel='yes'>
    <label>qemu:qemu</label>
  </seclabel>
...

Domain libvirt_test_api XML configuration edited.

5. start domain

# virsh start libvirt_test_api
error: Failed to start domain libvirt_test_api
error: internal error Process exited while reading console log output: 2012-11-06 07:33:22.962+0000: 7658: debug : virFileClose:72 : Closed fd 21
2012-11-06 07:33:22.962+0000: 7658: debug : virFileClose:72 : Closed fd 28
2012-11-06 07:33:22.963+0000: 7658: debug : virFileClose:72 : Closed fd 3
bind(unix:/var/lib/libvirt/qemu/libvirt_test_api.monitor): Permission denied
chardev: opening backend "socket" failed

The error is at binding the unix socket. I don't know whether libvirt tries to open the image file first or bind the unix socket first; if the image is opened first, then per-VM DAC works for opening files.

One possible explanation for the failure is that, with the static DAC label set, the socket is bound using the user/group from the static DAC label, which is qemu.



scenario 3:
current user: root
qemu.conf: qemu/qemu
dac label: qemu:qemu


1. change user/group in qemu.conf as qemu
# vim /etc/libvirt/qemu.conf
...
user = "qemu"
group = "qemu"

# service libvirtd restart

2. start domain 
# virsh start libvirt_test_api
Domain libvirt_test_api started

There is no bind error. One explanation is that the bind happened as the current user root here, but then it should not have failed in scenario 2.

Hi Eric, 

Can you help explain why the bind fails in scenario 2? That will help me figure out whether per-VM DAC is working correctly when opening files.

Thanks.

Comment 4 Ján Tomko 2012-11-09 09:49:12 UTC
At startup, libvirtd changes the ownership of /var/lib/libvirt/qemu (and a few other subdirectories like save and snapshot) to the values set in qemu.conf (or built-in defaults). It doesn't change the permissions.

If these are root:root, qemu running as qemu:qemu can't create a socket in there. It should work if you use the qemu group in both qemu.conf and the seclabel (assuming the directory is writable by the group), or if you adjust the permissions manually.
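
For example, a quick way to check and, if needed, adjust this after a libvirtd restart (the same workaround is used in the verification steps below; substitute whatever owner the seclabel specifies):

# ls -ld /var/lib/libvirt/qemu
# chown qemu:qemu /var/lib/libvirt/qemu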

Comment 7 Martin Kletzander 2013-07-24 13:36:20 UTC
This should now be fixed upstream with commit v1.1.1-rc1-6-g849df28:

commit 849df2875d52aba4d8b82d883c545a7101476d52
Author: Martin Kletzander <mkletzan>
Date:   Fri May 24 18:26:14 2013 +0200

    Make qemuOpenFile aware of per-VM DAC seclabel.
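
As a quick sanity check on a RHEL 6 host, comparing the installed build against the Fixed In Version noted above (libvirt-0.10.2-31.el6 or newer) shows whether the backport is present:

# rpm -q libvirt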

Comment 14 zhenfeng wang 2014-04-15 13:49:18 UTC
Hi Martin,
I'm verifying this bug right now. The following are my verification steps; please take a look at whether they are enough to verify this bug. Thanks.

pkg info
libvirt-0.10.2-32.el6.x86_64
kernel-2.6.32-431.14.1.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.424.el6.x86_64

steps
prepare 1:
On the NFS server
1. prepare a domain with its image on root-squashed NFS
# mount -o vers=3 $nfs_server:/export /var/lib/libvirt/images/

2. Add a user test1 on the NFS server
# id test1
uid=509(test1) gid=509(test1) groups=509(test1)

# ll /var/lib/libvirt/images/
total 4544356
-rw-r--r--. 1 test1 test1 4649975808 Jun 15 18:42 rhel6.img

On the NFS client:
Add a user test1 with the same uid & gid as the test1 user on the NFS server
# id test1
uid=509(test1) gid=509(test1) groups=509(test1)

set dynamic_ownership = 0 in qemu.conf

# service libvirtd restart

scenario 1:
current user: root
qemu.conf: default (which will be qemu/qemu)
dac label: test1:test1

1. add a static DAC seclabel
# virsh edit libvirt_test_api
...
  <seclabel type='static' model='dac' relabel='yes'>
    <label>test1:test1</label>
  </seclabel>
...

2. start domain
# virsh start rhel6m
error: Failed to start domain rhel6m
error: internal error Process exited while reading console log output: qemu-kvm: -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel6m.monitor,server,nowait: socket bind failed: Permission denied
qemu-kvm: -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel6m.monitor,server,nowait: chardev: opening backend "socket" failed

qemu running as test1:test1 can't create a socket in /var/lib/libvirt/qemu, since libvirtd has changed the ownership of /var/lib/libvirt/qemu to "qemu:qemu" based on the configuration in qemu.conf; so change the ownership to test1:test1, then start the guest again:

# chown test1:test1 /var/lib/libvirt/qemu/
# ll /var/lib/libvirt/qemu/ -d
drwxr-x---. 5 test1 test1 4096 Apr 15 19:48 /var/lib/libvirt/qemu/
# virsh start rhel6m
Domain rhel6m started

3. Check the qemu process's ownership: it is running as test1, so the per-VM DAC label works.
# ps aux|grep rhel6m
test1    24027 45.8  0.8 1491040 292028 ?      Sl   20:08   0:35 /usr/libexec/qemu-kvm -name rhel6m -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp

4. Do some operations with the guest
=======save the guest======
# virsh save rhel6m rhel6m.save
Domain rhel6m saved to rhel6m.save

# virsh restore rhel6m.save 
Domain restored from rhel6m.save

=======Do managedsave with the guest =====
# virsh managedsave rhel6m
Domain rhel6m state saved by libvirt

# virsh start rhel6m
Domain rhel6m started

======DO snapshot with the guest ======
# virsh snapshot-create-as rhel6m 
Domain snapshot 1397564702 created

# virsh snapshot-list rhel6m
 Name                 Creation Time             State
------------------------------------------------------------
 1397564702           2014-04-15 20:25:02 +0800 running

======blockcopy the guest======
# cat rhel6m.xml
<domain type='kvm'>
  <name>rhel6m</name>
---
<emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/rhel6.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
--
  <seclabel type='static' model='dac' relabel='yes'>
    <label>test1:test1</label>
  </seclabel>
</domain>
# virsh create rhel6m1.xml 
Domain rhel6m created from rhel6m1.xml

# virsh list
 Id    Name                           State
----------------------------------------------------
 8     rhel6m                         running
 
Block copy fails with a permission denied error if the destination image is not located on the NFS storage; there is an existing bug 924151 about this issue, which has been closed WONTFIX.
# virsh blockcopy rhel6m vda /var/lib/libvirt/images/a.bak
error: internal error unable to execute QEMU command '__com.redhat_drive-mirror': /var/lib/libvirt/images/a.bak: error while creating qcow2: Permission denied

scenario 2:
current user: root
qemu.conf: root/root
dac label: dynamic default(which will be qemu:qemu)

1. set user/group as root in qemu.conf
# vim /etc/libvirt/qemu.conf
...
user = "root"
group = "root"

# service libvirtd restart

2. start domain without static dac

# virsh start rhel6m
error: Failed to start domain rhel6m
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/0
qemu-kvm: -drive file=/mnt/rhel6.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: could not open disk image /mnt/rhel6.img: Permission denied
This is expected: the attempt to start as root fails first (root is squashed by NFS), then libvirt retries with the user/group from qemu.conf, which is also root, so that fails as well.

scenario 3:
On the NFS server
Change the NFS image's ownership to qemu:qemu on the NFS server
# chown qemu:qemu rhel6.img 
# ll
total 224
-rw-r--r--. 1 qemu qemu 3881811968 Apr 15 20:32 rhel6.img

On the NFS client
current user: root
qemu.conf: qemu/qemu
dac label: qemu:qemu


1. change user/group in qemu.conf as qemu
# vim /etc/libvirt/qemu.conf
...
user = "qemu"
group = "qemu"
set dynamic_ownership = 0 in qemu.conf

# service libvirtd restart

2. start domain 
# virsh start rhel6m
Domain rhel6m started

3. Check the qemu process's ownership: it is running as qemu
# ps aux|grep qemu
qemu     28145  4.2  0.0 1490960 29564 ?       Sl   20:55   0:09 /usr/libexec/qemu-kvm -name rhel6m -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 

4. Do the operations from step 4 in scenario 1
Got the same results as in step 4 of scenario 1


prepare 2:
Test the above scenarios on the localhost
scenario 1:
current user: root
qemu.conf: default (which will be qemu/qemu)
dac label: test1:test1
dynamic_ownership = 1

1. add a static DAC seclabel
# virsh edit libvirt_test_api
...
  <seclabel type='static' model='dac' relabel='yes'>
    <label>test1:test1</label>
  </seclabel>
...

2. start domain
# virsh start rhel6m
error: Failed to start domain rhel6m
error: internal error Process exited while reading console log output: qemu-kvm: -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel6m.monitor,server,nowait: socket bind failed: Permission denied
qemu-kvm: -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel6m.monitor,server,nowait: chardev: opening backend "socket" failed

# chown test1:test1 /var/lib/libvirt/qemu/
# ll /var/lib/libvirt/qemu/ -d
drwxr-x---. 5 test1 test1 4096 Apr 15 19:48 /var/lib/libvirt/qemu/
# virsh start rhel6m
Domain rhel6m started

3. Check the qemu process's ownership: it is running as test1, so the per-VM DAC label works.
# ps aux|grep qemu
test1    30858 50.2  0.0 1492016 31852 ?       Sl   21:18   0:04 /usr/libexec/qemu-kvm -name rhel6m -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off

4. Do some operations with the guest
=======save the guest======
# virsh save rhel6m rhel6m.save
Domain rhel6m saved to rhel6m.save

# virsh restore rhel6m.save 
Domain restored from rhel6m.save

=======Do managedsave with the guest =====
# virsh managedsave rhel6m
Domain rhel6m state saved by libvirt

# virsh start rhel6m
Domain rhel6m started

======DO snapshot with the guest ======
# virsh snapshot-create-as rhel6m 
Domain snapshot 1397564702 created

# virsh snapshot-list rhel6m
 Name                 Creation Time             State
------------------------------------------------------------
 1397564702           2014-04-15 20:25:02 +0800 running

======blockcopy the guest======
create a guest with the following label
# cat rhel6m.xml

--
  <seclabel type='static' model='dac' relabel='yes'>
    <label>test1:test1</label>
  </seclabel>
</domain>
# virsh create rhel6m1.xml 
Domain rhel6m created from rhel6m1.xml

# virsh blockcopy rhel6m vda bak
Block Copy started
# virsh blockjob rhel6m vda
Block Copy: [100 %]

Scenario 2:
current user: root
qemu.conf: qemu/qemu
dac label: qemu:qemu
set dynamic_ownership = 1

# service libvirtd restart

1. start domain 
# virsh start rhel6m
Domain rhel6m started

2. Check the qemu process's ownership: it is running as qemu
# ps aux|grep qemu
qemu     28145  4.2  0.0 1490960 29564 ?       Sl   20:55   0:09 /usr/libexec/qemu-kvm -name rhel6m -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 

3. Do the operations from step 4 in scenario 1
Got the same results as in step 4 of scenario 1

Comment 15 Martin Kletzander 2014-04-16 10:15:36 UTC
I'd say that starting a domain that has its disk on root-squashed NFS with a DAC label different from the default is enough, but checking the snapshots and block-copy is great.  This is definitely verified, thanks.

Comment 16 zhenfeng wang 2014-04-16 10:34:38 UTC
Thanks to Martin for his patient help; marking this bug verified. Some other steps that may be helpful when going through similar bugs later:

On the NFS server
Create an image on the NFS server with the same ownership as the guest's image
# cat /etc/exports 
/export *(rw,async,root_squash)

# ll /export
total 3560
-rw-r--r--. 1 qemu qemu 3881811968 Apr 15 20:58 rhel6.img
-rw-r--r--. 1 qemu qemu 1073741824 Apr 16 18:11 vdb.img

On the NFS client

current user: root
qemu.conf: default (which will be qemu/qemu)
dac label: qemu:qemu
dynamic_ownership = 1

1. add a static DAC seclabel
# virsh edit libvirt_test_api
...
  <seclabel type='static' model='dac' relabel='yes'>
    <label>test1:test1</label>
  </seclabel>
...

2. start the guest
# virsh start rhel6m
Domain rhel6m started

3. hotplug the disk we created on the NFS server
# virsh attach-disk rhel6m /mnt/vdb.img vdb
Disk attached successfully

# virsh dumpxml rhel6m |grep "disk type" -A 5
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/rhel6.img'>
        <seclabel model='selinux' relabel='no'/>
      </source>
      <target dev='vda' bus='virtio'/>
--
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/vdb.img'>
        <seclabel model='selinux' relabel='no'/>
      </source>
      <target dev='vdb' bus='virtio'/>

# virsh detach-disk rhel6m vdb
Disk detached successfully

Comment 18 errata-xmlrpc 2014-10-14 04:14:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html

