Bug 601249
Summary: [vdsm] [libvirt intg] unable to start vm while selinux is in enforcing state (unable to access disk image)

Product: Red Hat Enterprise Linux 6
Component: selinux-policy
Version: 6.1
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Haim <hateya>
Assignee: Miroslav Grepl <mgrepl>
QA Contact: Milos Malik <mmalik>
CC: antillon.maurizio, bazulay, berrange, danken, dhiller, dwalsh, hateya, iheim, jrieden, mgoldboi, mmalik, Rhev-m-bugs, syeghiay, xen-maint, yeylon, ykaul
Target Milestone: rc
Target Release: ---
Keywords: Reopened
Whiteboard: vdsm & libvirt integration
Fixed In Version: selinux-policy-3.7.19-37.el6
Doc Type: Bug Fix
Last Closed: 2010-11-10 21:34:34 UTC
Bug Blocks: 581275, 598533
Description (Haim, 2010-06-07 14:57:16 UTC)
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Can you show me these two too:

# ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6
# ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b

And also please attach the /var/log/audit/audit.log file showing the AVCs that occur.

Attached --> this is a new repro though, so the files are different:

libvirtError: internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d: Permission denied

bash-4.1$ ls -lZ /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d
lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0 /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d -> /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d

bash-4.1$ ls -LZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d
ls: cannot access /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d: No such file or directory

bash-4.1$ ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b
lrwxrwxrwx. root root system_u:object_r:device_t:s0 ids -> ../dm-15
lrwxrwxrwx. root root system_u:object_r:device_t:s0 inbox -> ../dm-16
lrwxrwxrwx. root root system_u:object_r:device_t:s0 leases -> ../dm-14
lrwxrwxrwx. root root system_u:object_r:device_t:s0 master -> ../dm-18
lrwxrwxrwx. root root system_u:object_r:device_t:s0 metadata -> ../dm-13
lrwxrwxrwx. root root system_u:object_r:device_t:s0 outbox -> ../dm-17

bash-4.1$

type=CRED_ACQ msg=audit(1275996141.412:2110): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_START msg=audit(1275996141.413:2111): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_END msg=audit(1275996141.413:2112): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_CMD msg=audit(1275996141.413:2113): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F62696E2F63686F776E207664736D3A6B766D202F6465762F64353864383363352D353030382D343930362D396334342D626239393135343238333237202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6D65746164617461202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6C6561736573202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F696473202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F696E626F78202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6F7574626F78 terminal=? res=success'

Sorry, I meant to ask for 'ls -alZ' rather than just 'ls -lZ', so that it shows the directory permission too.
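Side note: the `cmd=` field in USER_CMD audit records is hex-encoded ASCII, so the long blob above can be read back as plain text. A minimal decoding sketch using only the Python standard library (the sample value is the short `cmd=` string from a later USER_CMD record in this thread; the long one above decodes to a /bin/chown vdsm:kvm invocation over the storage-domain device links):

```python
# Audit USER_CMD records hex-encode the executed command line in the
# cmd= field; bytes.fromhex() recovers the original ASCII text.
def decode_audit_cmd(hex_value: str) -> str:
    return bytes.fromhex(hex_value).decode("ascii")

# Short cmd= value taken from a USER_CMD record later in this bug:
print(decode_audit_cmd("2F7362696E2F697363736961646D202D6D2073657373696F6E202D52"))
# → /sbin/iscsiadm -m session -R
```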
Can you show me the directory again: ls -alZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b

Also, is that really the full audit.log contents? There are no 'AVC' lines in what you posted, which makes it unlikely to be an SELinux problem.

The log appears under /var/log/messages and not /var/log/audit/, and looks like this:

Jun 8 15:27:36 white-vdse kernel: type=1400 audit(1276000056.142:8): avc: denied { read } for pid=3598 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
Jun 8 15:27:36 white-vdse kernel: type=1400 audit(1276000056.160:9): avc: denied { read } for pid=3598 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file

bash-4.1$ ls -alZ /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6
lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0 /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 -> /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6

bash-4.1$ ls -alZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b
drwxr-xr-x. vdsm kvm system_u:object_r:device_t:s0 .
drwxr-xr-x. root root system_u:object_r:device_t:s0 ..
lrwxrwxrwx. root root system_u:object_r:device_t:s0 ids -> ../dm-15
lrwxrwxrwx. root root system_u:object_r:device_t:s0 inbox -> ../dm-16
lrwxrwxrwx. root root system_u:object_r:device_t:s0 leases -> ../dm-14
lrwxrwxrwx. root root system_u:object_r:device_t:s0 master -> ../dm-18
lrwxrwxrwx. root root system_u:object_r:device_t:s0 metadata -> ../dm-13
lrwxrwxrwx. root root system_u:object_r:device_t:s0 outbox -> ../dm-17

> denied { read } for pid=3598 comm="qemu-kvm"
> name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117
> scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023
> tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
Ok so it appears that '/dev/d9124e52-d42a-4b0c-8657-523bc5b6733b' is not a directory itself, but rather a symlink to a directory. And SELinux appears to be forbidding QEMU permission to follow the symlink. I'm not sure whether this is an SELinux policy bug, or a mistake in labelling somewhere yet.
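The point about symlink following can be illustrated in isolation: two distinct objects are checked on this path, the symlink itself (what SELinux mediates as `tclass=lnk_file`, under the link's own context) and the target it resolves to. A minimal stand-alone sketch of that distinction using the Python standard library (a toy tmpdir layout, not RHEV-specific):

```python
import os
import tempfile

# Build a directory with a symlink to a file, mirroring the
# /rhev/...  ->  /dev/...  layout from this bug.
d = tempfile.mkdtemp()
target = os.path.join(d, "disk.img")
with open(target, "w") as f:
    f.write("image data")
link = os.path.join(d, "link-to-disk")
os.symlink(target, link)

# lstat()/islink() examine the link object itself; this is the access
# that was denied above with tclass=lnk_file.
assert os.path.islink(link)

# open() follows the link to the target; reading through the link only
# works if BOTH the lnk_file read and the target open are permitted.
with open(link) as f:
    assert f.read() == "image data"
```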
SELinux bug.

Miroslav, add dev_read_generic_symlinks(virt_domain) to virt.te and

########################################
## <summary>
##	Read symbolic links in device directories.
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`dev_read_generic_symlinks',`
	gen_require(`
		type device_t;
	')

	allow $1 device_t:lnk_file read_lnk_file_perms;
')

to devices.if.

*** Bug 594410 has been marked as a duplicate of this bug. ***

Fixed in selinux-policy-3.7.19-24.el6.noarch.

Downloaded the rpm manually from brew, installed and tested; still getting the same result:

selinux-policy-3.7.19-24.el6.noarch

17:58:03.217: error : qemuConnectMonitor:1577 : Failed to connect monitor for libvirt-pool-03
17:58:03.217: error : qemudWaitForMonitor:2498 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/a6431da5-09b5-42b0-8c53-a0f454bc8925/9205859a-bc75-400d-b9c2-7a15d5188c81: Permission denied

Jun 10 17:58:00 white-vdse kernel: type=1400 audit(1276181880.288:4): avc: denied { read } for pid=3430 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:svirt_t:s0:c195,c370 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
Jun 10 17:58:00 white-vdse kernel: type=1400 audit(1276181880.302:5): avc: denied { read } for pid=3430 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:svirt_t:s0:c195,c370 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file

Dan, it looks like we also need files_read_default_symlinks(virt_domain).

No, I think we need to label /rhev. What are we using this directory for?

/rhev is where we mount the storage pool, storage domains and the list of disk images (which are in the storage domains). It can mount NFS domains, or FC/iSCSI-based VGs.

Then let's label it mnt_t.
chcon -t mnt_t /rhev

Miroslav, add:

/rhev	-d	gen_context(system_u:object_r:mnt_t,s0)

Fixed in selinux-policy-3.7.19-26.el6.

Moving back to assignee as this issue failed QA. Trying to start a vm with selinux enabled (running on a block device) results in an unexpected error produced by qemu that it has no permission.

This bug failed QA as we still hit the original issue. Trying to start a vm over libvirt & qemu, running on an iscsi block device with selinux enabled on the host, results in the following error:

  File "/usr/share/vdsm/vm.py", line 574, in _execqemu
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 571, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1282, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/images/8fad46f6-c802-4328-a38a-0564068bdfcc/783df1d0-0485-4585-b0ea-9e3c776b4eb8: Permission denied
Thread-779::ERROR::2010-07-12 14:53:20,521::vm::615::vds.vmlog.57b52f3e-13e3-4388-9743-6d28bd63f9c9::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 611, in _getQemuError
    for line in file(self.dumpFile).readlines():
IOError: [Errno 2] No such file or directory: '/var/run/vdsm/57b52f3e-13e3-4388-9743-6d28bd63f9c9.stdio.dump'
Thread-779::DEBUG::2010-07-12 14:53:20,521::vm::1662::vds.vmlog.57b52f3e-13e3-4388-9743-6d28bd63f9c9::Changed state to Down: Unexpected Create Error

From Dan's comment, it looks like directories under /rhev should have been labelled 'mnt_t', though they still use the old label:

lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0 /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/images/8fad46f6-c802-4328-a38a-0564068bdfcc/783df1d0-0485-4585-b0ea-9e3c776b4eb8 -> /dev/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/783df1d0-0485-4585-b0ea-9e3c776b4eb8

Update: I didn't use the correct switch (ls -lZd), which shows that /rhev/ has the mnt_t context:

[root@pele ~]# ls -lZd /rhev/
drwxr-xr-x. root root system_u:object_r:mnt_t:s0 /rhev/

However, new mounts created under this directory on the fly get the following context:

[root@pele ~]# ls -lZd /rhev/data-center/
drwxr-xr-x. vdsm kvm system_u:object_r:default_t:s0 /rhev/data-center/

When I tried to change it manually using chcon:

[root@pele ~]# chcon -t mnt_t /rhev/data-center/
[root@pele ~]# ls -lZd /rhev/data-center/

the security context changed correctly, though I still cannot start vms, as I get the following AVC:

type=AVC msg=audit(1278941168.858:117904): avc: denied { read } for pid=26194 comm="qemu-kvm" name="aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f" dev=dm-0 ino=913952 scontext=system_u:system_r:svirt_t:s0:c418,c999 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1278941168.858:117904): arch=c000003e syscall=2 success=no exit=-13 a0=118f4c0 a1=84002 a2=0 a3=40 items=0 ppid=1 pid=26194 auid=4294967295 uid=36 gid=36 euid=36 suid=36 fsuid=36 egid=36 sgid=36 fsgid=36 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c418,c999 key=(null)
type=ANOM_PROMISCUOUS msg=audit(1278941168.924:117905): dev=vnet0 prom=0 old_prom=256 auid=4294967295 uid=36 gid=36 ses=4294967295
type=CRED_ACQ msg=audit(1278941172.439:117906): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_START msg=audit(1278941172.439:117907): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_END msg=audit(1278941172.439:117908): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_CMD msg=audit(1278941172.440:117909): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F7362696E2F697363736961646D202D6D2073657373696F6E202D52 terminal=? res=success'
type=CRED_ACQ msg=audit(1278941172.460:117910): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_START msg=audit(1278941172.460:117911): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_END msg=audit(1278941172.461:117912): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_CMD msg=audit(1278941172.461:117913): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F62696

I guess we need

/rhev	-d	gen_context(system_u:object_r:mnt_t,s0)

/rhev(/[^/]*)?	-d	gen_context(system_u:object_r:mnt_t,s0)
/rhev/[^/]*/.*	<<none>>

Just like the labels of /mnt.

Fixed in selinux-policy-3.7.19-32.el6.noarch.

It looks like the policy was fixed, and now all mounts have the correct label; however, I am still unable to start guests when SELinux is enforcing. The policy looks as follows:

[root@infra-vdsa ~]# semanage fcontext -l | grep rhev
/rhev            directory   system_u:object_r:mnt_t:s0
/rhev(/[^/]*)?   directory   system_u:object_r:mnt_t:s0
/rhev/[^/]*/.*   all files   <<None>>

When I start the machine, I get the following permission error:

07:54:20.914: error : qemudWaitForMonitor:2548 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b: Permission denied

permissions:

[root@infra-vdsa ~]# ls -lZd /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b
lrwxrwxrwx. vdsm kvm system_u:object_r:mnt_t:s0 /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b -> /dev/88703353-1968-4875-bdc5-604582582f22/a6957d93-268c-4fd5-9a39-8e4115ad6c6b
[root@infra-vdsa ~]# ls -lZd /dev/88703353-1968-4875-bdc5-604582582f22/
drwxr-xr-x. vdsm kvm system_u:object_r:device_t:s0 /dev/88703353-1968-4875-bdc5-604582582f22/

Please feel free to contact me in case you want to inspect the machine and configuration. Also note that I performed a reboot after setting SELinux to enforcing (for relabelling to take effect). Moving to ASSIGNED.

Any AVC messages?
type=AVC msg=audit(1281255977.973:1864353): avc: denied { read } for pid=26818 comm="qemu-kvm" name="1a5d692c-db3f-45cd-9f11-34be4fb86b6d" dev=dm-0 ino=261173 scontext=system_u:system_r:svirt_t:s0:c266,c992 tcontext=unconfined_u:object_r:mnt_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1281255977.973:1864353): arch=c000003e syscall=2 success=yes exit=9 a0=2565a10 a1=800 a2=0 a3=0 items=0 ppid=1 pid=26818 auid=0 uid=36 gid=36 euid=36 suid=36 fsuid=36 egid=36 sgid=36 fsgid=36 tty=(none) ses=3 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c266,c992 key=(null)

Created attachment 437419 [details]
full audit.log
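For triage, the useful pieces of each AVC record are the denied permission, the source and target contexts, and the target class. A small parser sketch (a hypothetical helper, not part of audit2allow; the regex matches the kernel AVC format shown in this log):

```python
import re

# Matches the kernel AVC denial format, e.g.
# "avc: denied { read } ... scontext=... tcontext=... tclass=..."
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)"
)

def parse_avc(line):
    """Return the denial's key fields as a dict, or None if no match."""
    m = AVC_RE.search(line)
    return m.groupdict() if m else None

# Sample AVC line from this bug's audit.log:
line = ('type=AVC msg=audit(1281255977.973:1864353): avc: denied { read } '
        'for pid=26818 comm="qemu-kvm" name="1a5d692c-db3f-45cd-9f11-34be4fb86b6d" '
        'dev=dm-0 ino=261173 scontext=system_u:system_r:svirt_t:s0:c266,c992 '
        'tcontext=unconfined_u:object_r:mnt_t:s0 tclass=lnk_file')
print(parse_avc(line))
# → {'perm': 'read', 'scontext': 'system_u:system_r:svirt_t:s0:c266,c992',
#    'tcontext': 'unconfined_u:object_r:mnt_t:s0', 'tclass': 'lnk_file'}
```

Piping `grep avc:` output through something like this quickly shows which source domain is being denied which access on which label, which is exactly the progression visible above (qemu_t, then svirt_t, against default_t and later mnt_t).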
Haim, if you execute

# grep svirt_t audit.log | audit2allow -M mysvirt
# semodule -i mysvirt.pp

does it work?

Yes, it's working. Need any logs?

(In reply to comment #31)
> yes. it's working. need any logs ?

Ok, thanks for testing. I am fixing it.

Fixed in selinux-policy-3.7.19-37.el6.noarch.

Verified. Steps for verification:
1) setenforce 1
2) service vdsmd restart (also restart libvirtd)
3) start a new guest on the host - successful.

Please note that I monitored /var/log/audit/audit.log and didn't see any AVC, nor anything in /var/log/messages.

Fixed on the following versions:

selinux-policy-targeted-3.7.19-38.el6.noarch
libselinux-utils-2.0.94-1.el6.x86_64
selinux-policy-3.7.19-38.el6.noarch
libselinux-2.0.94-1.el6.x86_64
libselinux-debuginfo-2.0.94-1.el6.x86_64
2.6.32-59.1.el6.x86_64
libvirt-0.8.1-23.el6.x86_64
vdsm-4.9-12.2.x86_64
device-mapper-multipath-0.4.9-25.el6.x86_64
lvm2-2.02.72-4.el6.x86_64
qemu-kvm-0.12.1.2-2.109.el6.x86_64

Kept on testing, and it seems like it doesn't work on an NFS mount point. Repro steps are quite simple:
1) work on NFS storage
2) create a new vm (guest machine)
3) setenforce 1
4) start (virsh create)

11:46:42.886: info : qemuConnectMonitor:1617 : Failed to connect monitor for nfsvirt-rhel5
11:46:42.886: error : qemudWaitForMonitor:2548 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/8fbff449-eeeb-478f-b40c-bc4001372902/images/83128bab-8f05-4278-937e-e6141c03bd6f/4ff266a4-7bf6-4008-ab42-2a54868c924b: Permission denied

[root@silver-vdse ~]# ls -Z /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0 250afad0-bed7-4bce-8841-73906e1c3e14 -> /rhev/data-center/mnt/qanashead.qa.lab.tlv.redhat.com:_export_hateya_rhel6.0-data2/250afad0-bed7-4bce-8841-73906e1c3e14
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0 8fbff449-eeeb-478f-b40c-bc4001372902 -> /rhev/data-center/mnt/qanashead.qa.lab.tlv.redhat.com:_export_hateya_rhel6.0-data1/8fbff449-eeeb-478f-b40c-bc4001372902
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0 mastersd -> 8fbff449-eeeb-478f-b40c-bc4001372902
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0 tasks -> mastersd/master/tasks
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0 vms -> mastersd/master/vms

[root@silver-vdse ~]# ls -Z /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/8fbff449-eeeb-478f-b40c-bc4001372902/
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0 dom_md
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0 images
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0 master

Could you execute

# setsebool -P virt_use_nfs 1

Does the problem go away? If not, please switch to permissive mode and attach the AVC messages you are seeing.

Yes, it goes away, though I still see the following libvirt error:

13:19:59.558: warning : virDomainDiskDefForeachPath:7654 : Ignoring open failure on /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/250afad0-bed7-4bce-8841-73906e1c3e14/images/858b7445-3ecf-4b8b-ae2e-c8d8a0ba9541/dacc7c3f-967c-4d24-abb8-d3bc621f9c04: Permission denied

Is it related?

Are you seeing it also in permissive mode?

> 13:19:59.558: warning : virDomainDiskDefForeachPath:7654 : Ignoring open
> failure on /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/250afad0-bed7-4bce-8841-73906e1c3e14/images/858b7445-3ecf-4b8b-ae2e-c8d8a0ba9541/dacc7c3f-967c-4d24-abb8-d3bc621f9c04: Permission denied
VDSM creates its directories with non-root user/group ownership, so if you have a root_squash NFS export, libvirtd won't be able to open the path. This isn't a problem as long as VDSM has given QEMU itself the correct permissions to open the path.
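The ownership point can be made concrete with plain POSIX permission arithmetic. A toy model follows: uid/gid 36 for vdsm/kvm come from the audit records above; 65534 for a squashed `nobody` is an assumption about the NFS server's configuration, and the mode-bit check is deliberately simplified (no supplementary groups, no ACLs):

```python
# Toy POSIX read check: owner bits, then group bits, then other bits.
def may_read(uid, gid, st_uid, st_gid, mode):
    if uid == st_uid:
        return bool(mode & 0o400)
    if gid == st_gid:  # simplified: a single group, no supplementary groups
        return bool(mode & 0o040)
    return bool(mode & 0o004)

# Image file created by VDSM as vdsm:kvm (36:36), mode 0660:
assert may_read(36, 36, 36, 36, 0o660)            # QEMU, running as vdsm:kvm
assert not may_read(65534, 65534, 36, 36, 0o660)  # root squashed to nobody
```

So a root libvirtd, squashed to nobody on the NFS server, fails the open and logs the "Ignoring open failure" warning, while QEMU running as vdsm:kvm still reads the image fine.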
The iscsi part was fixed; waiting for a vdsm fix for the NFS part (described in bug 624432).

This bug can move to VERIFIED, as bug 624432 was fixed by vdsm. Managed to run vms under both nfs and iscsi storage with selinux on (enforcing).

selinux-policy-3.7.19-54.el6.noarch
2.6.32-71.el6.x86_64
libvirt-0.8.1-27.el6.x86_64
vdsm-4.9-14.el6.x86_64
device-mapper-multipath-0.4.9-30.el6.x86_64
lvm2-2.02.72-8.el6.x86_64
qemu-kvm-0.12.1.2-2.113.el6.x86_64
iptables-1.4.7-3.el6.x86_64

Red Hat Enterprise Linux 6.0 is now available and should resolve the problem described in this bug report. This report is therefore being closed with a resolution of CURRENTRELEASE. You may reopen this bug report if the solution does not work for you.