Bug 601249

Summary: [vdsm] [libvirt intg] unable to start vm while selinux is in enforcing state (unable to access disk image)
Product: Red Hat Enterprise Linux 6
Component: selinux-policy
Version: 6.1
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Haim <hateya>
Assignee: Miroslav Grepl <mgrepl>
QA Contact: Milos Malik <mmalik>
CC: antillon.maurizio, bazulay, berrange, danken, dhiller, dwalsh, hateya, iheim, jrieden, mgoldboi, mmalik, Rhev-m-bugs, syeghiay, xen-maint, yeylon, ykaul
Target Milestone: rc
Target Release: ---
Keywords: Reopened
Whiteboard: vdsm & libvirt integration
Fixed In Version: selinux-policy-3.7.19-37.el6
Doc Type: Bug Fix
Last Closed: 2010-11-10 21:34:34 UTC
Bug Blocks: 581275, 598533
Attachments: full audit.log

Description Haim 2010-06-07 14:57:16 UTC
Description of problem:

unable to start a vm on an iscsi block device while selinux is set to 'enforcing'; the attempt fails with an unexpected error.
digging down a bit shows that the error comes from libvirt:

14:58:32.439: error : qemudWaitForMonitor:2498 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6: Permission denied

when I set selinux to 'permissive' I am able to run the vm, and the log looks as follows:

17:50:45.579: debug : virDomainInterfaceStats:4318 : domain=0x7f6e3800c550, path=vnet0, stats=0x7f6e4e35baa0, size=64
17:50:45.579: debug : virDomainFree:2215 : domain=0x7f6e3800c550
17:50:45.580: debug : virDomainGetInfo:2990 : domain=0x7f6e3c0009a0, info=0x7f6e4d95aab0
17:50:45.580: debug : qemudGetProcessInfo:4448 : Got status for 18991/0 user=986 sys=1831 cpu=2
17:50:45.580: debug : qemuMonitorGetBalloonInfo:993 : mon=0x7f6e38002a60
17:50:45.580: debug : qemuMonitorJSONCommandWithFd:217 : Send command '{"execute":"query-balloon"}' for write with FD -1
17:50:45.580: debug : qemuMonitorJSONIOProcessLine:115 : Line [{"return": {"actual": 536870912}}]
17:50:45.580: debug : qemuMonitorJSONIOProcess:188 : Total used 35 bytes out of 35 available in buffer
17:50:45.580: debug : qemuMonitorJSONCommandWithFd:222 : Receive command reply ret=0 errno=0 33 bytes '{"return": {"actual": 536870912}}'
17:50:45.580: debug : virDomainFree:2215 : domain=0x7f6e3c0009a0
17:50:47.581: debug : virDomainInterfaceStats:4318 : domain=0x7f6e30000980, path=vnet0, stats=0x7f6e4cf59aa0, size=64
17:50:47.582: debug : virDomainFree:2215 : domain=0x7f6e30000980
17:50:47.582: debug : virDomainGetInfo:2990 : domain=0x7f6e400009a0, info=0x7f6e4f75dab0
17:50:47.582: debug : qemudGetProcessInfo:4448 : Got status for 18991/0 user=988 sys=1836 cpu=0
17:50:47.582: debug : qemuMonitorGetBalloonInfo:993 : mon=0x7f6e38002a60
17:50:47.582: debug : qemuMonitorJSONCommandWithFd:217 : Send command '{"execute":"query-balloon"}' for write with FD -1
17:50:47.583: debug : qemuMonitorJSONIOProcessLine:115 : Line [{"return": {"actual": 536870912}}]
17:50:47.583: debug : qemuMonitorJSONIOProcess:188 : Total used 35 bytes out of 35 available in buffer
17:50:47.583: debug : qemuMonitorJSONCommandWithFd:222 : Receive command reply ret=0 errno=0 33 bytes '{"return": {"actual": 536870912}}'
17:50:47.583: debug : virDomainFree:2215 : domain=0x7f6e400009a0
17:50:49.584: debug : virDomainInterfaceStats:4318 : domain=0x7f6e4409ec30, path=vnet0, stats=0x7f6e4ed5caa0, size=64
17:50:49.584: debug : virDomainFree:2215 : domain=0x7f6e4409ec30
17:50:49.584: debug : virDomainGetInfo:2990 : domain=0x7f6e3800c550, info=0x7f6e4e35bab0
17:50:49.584: debug : qemudGetProcessInfo:4448 : Got status for 18991/0 user=989 sys=1841 cpu=2
17:50:49.584: debug : qemuMonitorGetBalloonInfo:993 : mon=0x7f6e38002a60
17:50:49.585: debug : qemuMonitorJSONCommandWithFd:217 : Send command '{"execute":"query-balloon"}' for write with FD -1
17:50:49.585: debug : qemuMonitorJSONIOProcessLine:115 : Line [{"return": {"actual": 536870912}}]
17:50:49.585: debug : qemuMonitorJSONIOProcess:188 : Total used 35 bytes out of 35 available in buffer
17:50:49.585: debug : qemuMonitorJSONCommandWithFd:222 : Receive command reply ret=0 errno=0 33 bytes '{"return": {"actual": 536870912}}'
17:50:49.585: debug : virDomainFree:2215 : domain=0x7f6e3800c550
17:50:51.220: debug : virDomainDestroy:2172 : domain=0x7f6e3c0009a0
17:50:51.220: debug : qemudShutdownVMDaemon:4103 : Shutting down VM 'libvirt-pool-02' migrated=0
17:50:51.220: debug : qemuMonitorClose:682 : mon=0x7f6e38002a60
17:50:51.221: debug : qemuMonitorFree:200 : mon=0x7f6e38002a60
17:50:51.473: debug : virDomainFree:2215 : domain=0x7f6e3c0009a0
17:50:51.473: debug : remoteRelayDomainEventLifecycle:118 : Relaying domain lifecycle event 5 1
17:50:51.473: debug : virDomainFree:2215 : domain=0x7f6e48079030
17:50:51.590: debug : virDomainInterfaceStats:4318 : domain=0x7f6e30000980, path=vnet0, stats=0x7f6e4cf59aa0, size=64
17:50:51.590: error : qemudDomainInterfaceStats:9647 : Domain not found: no domain with matching uuid '9ce42676-c08b-40f8-959c-c326caadc9b6'
17:50:51.590: debug : virDomainFree:2215 : domain=0x7f6e30000980
17:51:04.922: debug : virDomainCreateXML:1937 : conn=0x7f6e44000a30, xmlDesc=<?xml version="1.0" ?>
<domain type="kvm">
        <name>libvirt-pool-02</name>
        <uuid>9ce42676-c08b-40f8-959c-c326caadc9b6</uuid>
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <vcpu>1</vcpu>
        <devices>
                <disk device="disk" type="block">
                        <source dev="/rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6"/>
                        <target bus="ide" dev="hda"/>
                        <serial>4c-b983-b826bb5ee037</serial>
                        <driver cache="none" name="qemu" type="qcow2"/>
                </disk>
                <controller index="0" ports="16" type="virtio-serial"/>
                <channel type="unix">
                        <target name="org.linux-kvm.port.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/libvirt-pool-02.org.linux-kvm.port.0"/>
                </channel>
                <interface type="bridge">
                        <mac address="00:1a:4a:23:71:11"/>
                        <model type="virtio"/>
                        <source bridge="rhevm"/>
                </interface>
                <input bus="usb" type="tablet"/>
                <graphics autoport="yes" keymap="en-us" listen="0" passwd="12345" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
        </devices>
        <os>
                <type arch="x86_64" machine="pc">hvm</type>
                <boot dev="hd"/>
        </os>
        <clock adjustment="10800" offset="variable"/>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>qemu64</model>
                <topology cores="1" sockets="1" threads="1"/>
                <feature name="nx" policy="disable"/>
                <feature name="sse2" policy="require"/>
                <feature name="svm" policy="disable"/>
        </cpu>
</domain>
, flags=0
17:51:04.935: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
17:51:05.026: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:0
17:51:05.026: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
17:51:05.026: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:3
17:51:05.026: debug : qemuDomainPCIAddressSetNextAddr:2242 : Allocating PCI addr 0:0:4
17:51:05.026: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:2
17:51:05.026: debug : qemuDomainPCIAddressSetNextAddr:2242 : Allocating PCI addr 0:0:5
17:51:05.026: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:1
17:51:05.026: debug : qemudStartVMDaemon:3737 : Beginning VM startup process
17:51:05.026: debug : qemudStartVMDaemon:3746 : Preparing host devices
17:51:05.026: debug : qemudStartVMDaemon:3752 : Generating domain security label (if required)
17:51:05.026: debug : qemudStartVMDaemon:3758 : Generating setting domain security labels (if required)
17:51:05.026: debug : qemudStartVMDaemon:3766 : Ensuring no historical cgroup is lying around
17:51:05.030: debug : qemudStartVMDaemon:3810 : Creating domain log file
17:51:05.031: debug : qemudStartVMDaemon:3827 : Determing emulator version
17:51:05.042: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
17:51:05.124: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
17:51:05.124: debug : qemudStartVMDaemon:3833 : Setting up domain cgroup (if required)
17:51:05.125: debug : qemuSetupDiskCgroup:3419 : Process path /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 for disk
17:51:05.127: debug : qemuSetupDiskCgroup:3419 : Process path /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/../9f076c05-0eab-414c-b983-b826bb5ee037/7738cf8a-715b-49d3-bc2d-40726352bd6a for disk
17:51:05.128: debug : qemudStartVMDaemon:3842 : Preparing monitor state


the full error message once selinux is 'enforcing' looks as follows:
<domain type="kvm">
        <name>libvirt-pool-02</name>
        <uuid>9ce42676-c08b-40f8-959c-c326caadc9b6</uuid>
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <vcpu>1</vcpu>
        <devices>
                <disk device="disk" type="block">
                        <source dev="/rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6"/>
                        <target bus="ide" dev="hda"/>
                        <serial>4c-b983-b826bb5ee037</serial>
                        <driver cache="none" name="qemu" type="qcow2"/>
                </disk>
                <controller index="0" ports="16" type="virtio-serial"/>
                <channel type="unix">
                        <target name="org.linux-kvm.port.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/libvirt-pool-02.org.linux-kvm.port.0"/>
                </channel>
                <interface type="bridge">
                        <mac address="00:1a:4a:23:71:11"/>
                        <model type="virtio"/>
                        <source bridge="rhevm"/>
                </interface>
                <input bus="usb" type="tablet"/>
                <graphics autoport="yes" keymap="en-us" listen="0" passwd="12345" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
        </devices>
        <os>
                <type arch="x86_64" machine="pc">hvm</type>
                <boot dev="hd"/>
        </os>
        <clock adjustment="10800" offset="variable"/>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>qemu64</model>
                <topology cores="1" sockets="1" threads="1"/>
                <feature name="nx" policy="disable"/>
                <feature name="sse2" policy="require"/>
                <feature name="svm" policy="disable"/>
        </cpu>
</domain>
, flags=0
14:58:29.146: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.238: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:0
14:58:29.238: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.238: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:3
14:58:29.238: debug : qemuDomainPCIAddressSetNextAddr:2242 : Allocating PCI addr 0:0:4
14:58:29.238: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:2
14:58:29.238: debug : qemuDomainPCIAddressSetNextAddr:2242 : Allocating PCI addr 0:0:5
14:58:29.238: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:1
14:58:29.238: debug : qemudStartVMDaemon:3737 : Beginning VM startup process
14:58:29.238: debug : qemudStartVMDaemon:3746 : Preparing host devices
14:58:29.238: debug : qemudStartVMDaemon:3752 : Generating domain security label (if required)
14:58:29.238: debug : qemudStartVMDaemon:3758 : Generating setting domain security labels (if required)
14:58:29.238: debug : qemudStartVMDaemon:3766 : Ensuring no historical cgroup is lying around
14:58:29.243: debug : qemudStartVMDaemon:3810 : Creating domain log file
14:58:29.243: debug : qemudStartVMDaemon:3827 : Determing emulator version
14:58:29.254: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.369: debug : qemudStartVMDaemon:3833 : Setting up domain cgroup (if required)
14:58:29.369: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.370: debug : qemuSetupDiskCgroup:3419 : Process path /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 for disk
14:58:29.373: debug : qemuSetupDiskCgroup:3419 : Process path /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/../9f076c05-0eab-414c-b983-b826bb5ee037/7738cf8a-715b-49d3-bc2d-40726352bd6a for disk
14:58:29.377: debug : qemudStartVMDaemon:3842 : Preparing monitor state
14:58:29.378: debug : qemudStartVMDaemon:3874 : Assigning domain PCI addresses
14:58:29.378: debug : qemuCollectPCIAddress:2105 : Remembering PCI addr 0:0:4
14:58:29.378: debug : qemuCollectPCIAddress:2105 : Remembering PCI addr 0:0:2
14:58:29.378: debug : qemuCollectPCIAddress:2105 : Remembering PCI addr 0:0:5
14:58:29.378: debug : qemuCollectPCIAddress:2105 : Remembering PCI addr 0:0:1
14:58:29.378: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:0
14:58:29.378: debug : qemuDomainPCIAddressReserveAddr:2148 : Reserving PCI addr 0:0:3
14:58:29.378: debug : qemudStartVMDaemon:3893 : Building emulator command line
14:58:29.388: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.437: info : qemudDispatchSignalEvent:397 : Received unexpected signal 17
14:58:29.437: debug : qemudStartVMDaemon:4000 : Waiting for monitor to show up
14:58:29.437: debug : qemudWaitForMonitor:2462 : Connect monitor to 0x7f46a400c640 'libvirt-pool-02'
14:58:32.439: error : qemuMonitorOpenUnix:277 : monitor socket did not show up.: Connection refused
14:58:32.439: debug : qemuMonitorClose:682 : mon=0x7f46a4002a30
14:58:32.439: debug : qemuMonitorFree:200 : mon=0x7f46a4002a30
14:58:32.439: error : qemuConnectMonitor:1577 : Failed to connect monitor for libvirt-pool-02
14:58:32.439: error : qemudWaitForMonitor:2498 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6: Permission denied

[root@white-vdse ~]# ls -Z /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6
lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0   /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 -> /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6



14:58:32.440: debug : qemudShutdownVMDaemon:4103 : Shutting down VM 'libvirt-pool-02' migrated=0


reproduction: 100% 

packages: 

vdsm-4.9-8.el6.x86_64
qemu-kvm-0.12.1.2-2.69.el6.x86_64
libvirt-0.8.1-7.el6.x86_64
2.6.32-33.el6.x86_64

steps: 

1) create vm on block device (iscsi)
2) make sure selinux is in enforcing mode (/etc/sysconfig/selinux; a quick check is sketched below)
3) reboot (after setting selinux)
4) start vm (either from virsh or from vdsm)
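
(A quick way to confirm the mode before reproducing — a sketch using the standard SELinux utilities:

# getenforce
Enforcing
# sestatus

getenforce prints the running mode; sestatus also shows the mode configured in /etc/sysconfig/selinux.)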

Comment 2 RHEL Program Management 2010-06-07 15:13:18 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 3 Daniel Berrangé 2010-06-08 09:30:40 UTC
Can you show me these two too:

# ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6
# ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b


And also please attach the /var/log/audit/audit.log file showing the AVCs that occur.

Comment 4 Haim 2010-06-08 11:25:31 UTC
attached --> note this is a new repro, so the files are different:

libvirtError: internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d: Permission denied

bash-4.1$ ls -lZ /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d
lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0   /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/c6bda33d-1e60-4daa-9bb3-f7e0618e98a1/d563b7e7-6611-4555-8cf7-43b5129ec19d -> /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d


bash-4.1$ ls -LZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d
ls: cannot access /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/d563b7e7-6611-4555-8cf7-43b5129ec19d: No such file or directory

bash-4.1$ ls -lZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b
lrwxrwxrwx. root root system_u:object_r:device_t:s0    ids -> ../dm-15
lrwxrwxrwx. root root system_u:object_r:device_t:s0    inbox -> ../dm-16
lrwxrwxrwx. root root system_u:object_r:device_t:s0    leases -> ../dm-14
lrwxrwxrwx. root root system_u:object_r:device_t:s0    master -> ../dm-18
lrwxrwxrwx. root root system_u:object_r:device_t:s0    metadata -> ../dm-13
lrwxrwxrwx. root root system_u:object_r:device_t:s0    outbox -> ../dm-17
bash-4.1$ 

type=CRED_ACQ msg=audit(1275996141.412:2110): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_START msg=audit(1275996141.413:2111): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_END msg=audit(1275996141.413:2112): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=white-vdse.eng.lab.tlv.redhat.com addr=10.35.16.205 terminal=? res=success'
type=USER_CMD msg=audit(1275996141.413:2113): user pid=3716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F62696E2F63686F776E207664736D3A6B766D202F6465762F64353864383363352D353030382D343930362D396334342D626239393135343238333237202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6D65746164617461202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6C6561736573202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F696473202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F696E626F78202F6465762F64353864383363352D353030382D343930362D396334342D6262393931353432383332372F6F7574626F78 terminal=? res=success'
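
(For reference, the cmd= field in USER_CMD records is hex-encoded ASCII; it can be decoded with any hex tool — a sketch using xxd:

# echo -n 2F62696E2F63686F776E207664736D3A6B766D | xxd -r -p
/bin/chown vdsm:kvm

ausearch -i renders the whole record in decoded form the same way. The command above appears to be vdsm chown'ing the storage-domain device nodes to vdsm:kvm — note these are not AVC records.)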

Comment 5 Daniel Berrangé 2010-06-08 12:07:26 UTC

Sorry, I meant to ask for 'ls -alZ' rather than just 'ls -lZ', so that it shows the directory permissions too. Can you show me the directory again:

ls -alZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b


Also, is that really the full audit.log contents? There are no 'AVC' lines in what you posted, which makes it unlikely to be an SELinux problem.

Comment 6 Haim 2010-06-08 12:38:50 UTC
the log appears under /var/log/messages and not /var/log/audit/, and looks like this:


Jun  8 15:27:36 white-vdse kernel: type=1400 audit(1276000056.142:8): avc:  denied  { read } for  pid=3598 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
Jun  8 15:27:36 white-vdse kernel: type=1400 audit(1276000056.160:9): avc:  denied  { read } for  pid=3598 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file


bash-4.1$ ls -alZ /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 
lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0   /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/9f076c05-0eab-414c-b983-b826bb5ee037/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6 -> /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b/97e68bc6-d7ff-40cf-a1a1-db0bf52111f6

bash-4.1$ ls -alZ /dev/d9124e52-d42a-4b0c-8657-523bc5b6733b
drwxr-xr-x. vdsm kvm  system_u:object_r:device_t:s0    .
drwxr-xr-x. root root system_u:object_r:device_t:s0    ..
lrwxrwxrwx. root root system_u:object_r:device_t:s0    ids -> ../dm-15
lrwxrwxrwx. root root system_u:object_r:device_t:s0    inbox -> ../dm-16
lrwxrwxrwx. root root system_u:object_r:device_t:s0    leases -> ../dm-14
lrwxrwxrwx. root root system_u:object_r:device_t:s0    master -> ../dm-18
lrwxrwxrwx. root root system_u:object_r:device_t:s0    metadata -> ../dm-13
lrwxrwxrwx. root root system_u:object_r:device_t:s0    outbox -> ../dm-17

Comment 7 Daniel Berrangé 2010-06-08 13:15:21 UTC
> denied  { read } for  pid=3598 comm="qemu-kvm"
> name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117
> scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023
> tcontext=system_u:object_r:default_t:s0 tclass=lnk_file

Ok so it appears that '/dev/d9124e52-d42a-4b0c-8657-523bc5b6733b' is not a directory itself, but rather a symlink to a directory. And SELinux appears to be forbidding QEMU permission to follow the symlink. I'm not sure whether this is an SELinux policy bug, or a mistake in labelling somewhere yet.
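
(A quick way to tell a policy denial apart from a plain labelling mistake, assuming the type=1400 AVCs are still in the kernel ring buffer, is audit2why's dmesg mode — a sketch:

# audit2why -d

which reads the AVC records from dmesg and explains why each access was denied.)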

Comment 8 Daniel Walsh 2010-06-08 13:33:55 UTC
SELinux bug.

Miroslav, add

dev_read_generic_symlinks(virt_domain)

to virt.te

and
########################################
## <summary>
##	Read symbolic links in device directories.
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`dev_read_generic_symlinks',`
	gen_require(`
		type device_t;
	')

	allow $1 device_t:lnk_file read_lnk_file_perms;
')

to devices.if
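
(Until a rebuilt selinux-policy package is available, an affected host could carry an equivalent local module. A minimal sketch — assuming the denied domain is svirt_t, as in the later AVCs, and using a hypothetical module name:

# cat > myvirtsym.te << 'EOF'
module myvirtsym 1.0;

require {
	type svirt_t;
	type device_t;
	class lnk_file read;
}

# let confined qemu processes follow generic symlinks under /dev
allow svirt_t device_t:lnk_file read;
EOF
# checkmodule -M -m -o myvirtsym.mod myvirtsym.te
# semodule_package -o myvirtsym.pp -m myvirtsym.mod
# semodule -i myvirtsym.pp

The interface above is the proper fix for the shipped policy; a local module like this is only a stopgap.)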

Comment 9 Daniel Berrangé 2010-06-08 13:50:44 UTC
*** Bug 594410 has been marked as a duplicate of this bug. ***

Comment 10 Miroslav Grepl 2010-06-10 06:38:42 UTC
Fixed in selinux-policy-3.7.19-24.el6.noarch

Comment 11 Haim 2010-06-10 14:59:51 UTC
downloaded the rpm manually from brew, installed it and tested; still getting the same result:

selinux-policy-3.7.19-24.el6.noarch

17:58:03.217: error : qemuConnectMonitor:1577 : Failed to connect monitor for libvirt-pool-03
17:58:03.217: error : qemudWaitForMonitor:2498 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/606d043c-ef9c-4c6f-848b-5bd89325c78d/d9124e52-d42a-4b0c-8657-523bc5b6733b/images/a6431da5-09b5-42b0-8c53-a0f454bc8925/9205859a-bc75-400d-b9c2-7a15d5188c81: Permission denied


Jun 10 17:58:00 white-vdse kernel: type=1400 audit(1276181880.288:4): avc:  denied  { read } for  pid=3430 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:svirt_t:s0:c195,c370 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
Jun 10 17:58:00 white-vdse kernel: type=1400 audit(1276181880.302:5): avc:  denied  { read } for  pid=3430 comm="qemu-kvm" name="d9124e52-d42a-4b0c-8657-523bc5b6733b" dev=dm-0 ino=131117 scontext=system_u:system_r:svirt_t:s0:c195,c370 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file

Comment 12 Miroslav Grepl 2010-06-15 14:57:16 UTC
Dan,
it looks like we also need

files_read_default_symlinks(virt_domain)

Comment 13 Daniel Walsh 2010-06-15 20:50:33 UTC
No, I think we need to label /rhev?

What are we using this directory for?

Comment 14 Itamar Heim 2010-06-15 21:36:49 UTC
/rhev is where we mount the storage pool, storage domains and the list of disk images (which are in the storage domains).
it can mount nfs domains, or FC/iSCSI based VGs.

Comment 15 Daniel Walsh 2010-06-16 17:18:20 UTC
Then let's label it mnt_t.

chcon -t mnt_t /rhev
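
(Note that chcon only changes the label on the inode; a full relabel or restorecon would revert it. A persistent local equivalent — a sketch:

# semanage fcontext -a -t mnt_t '/rhev'
# restorecon -v /rhev

semanage records the file-context rule so the label survives relabelling; restorecon applies it immediately.)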

Comment 16 Daniel Walsh 2010-06-16 17:20:50 UTC
Miroslav add

/rhev			-d	gen_context(system_u:object_r:mnt_t,s0)

Comment 17 Miroslav Grepl 2010-06-18 08:14:34 UTC
Fixed in selinux-policy-3.7.19-26.el6.

Comment 19 Haim 2010-07-12 12:21:14 UTC
moving back to assignee as this issue failed QA.
trying to start a vm with selinux enabled (running on a block device) results in an unexpected error from qemu saying it has no permission.

this bug failed QA as we still hit the original issue.

trying to start a vm over libvirt & qemu running on an iscsi block device with selinux enabled on the host results in the following error:

 File "/usr/share/vdsm/vm.py", line 574, in _execqemu
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 571, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1282, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/images/8fad46f6-c802-4328-a38a-0564068bdfcc/783df1d0-0485-4585-b0ea-9e3c776b4eb8: Permission denied
Thread-779::ERROR::2010-07-12 14:53:20,521::vm::615::vds.vmlog.57b52f3e-13e3-4388-9743-6d28bd63f9c9::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 611, in _getQemuError
    for line in file(self.dumpFile).readlines():
IOError: [Errno 2] No such file or directory: '/var/run/vdsm/57b52f3e-13e3-4388-9743-6d28bd63f9c9.stdio.dump'
Thread-779::DEBUG::2010-07-12 14:53:20,521::vm::1662::vds.vmlog.57b52f3e-13e3-4388-9743-6d28bd63f9c9::Changed state to Down: Unexpected Create Error

from Dan's comment, it looks like directories under /rhev should have been labelled 'mnt_t', though they are still using the old label:

lrwxrwxrwx. vdsm kvm system_u:object_r:default_t:s0   /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/images/8fad46f6-c802-4328-a38a-0564068bdfcc/783df1d0-0485-4585-b0ea-9e3c776b4eb8 -> /dev/aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f/783df1d0-0485-4585-b0ea-9e3c776b4eb8

Comment 20 Haim 2010-07-12 13:25:20 UTC
update: 

I didn't use the correct switch (ls -lZd); with it, /rhev/ shows the mnt_t context:

[root@pele ~]# ls -lZd /rhev/
drwxr-xr-x. root root system_u:object_r:mnt_t:s0       /rhev/

however, new mounts created under this directory on the fly get the following context:

[root@pele ~]# ls -lZd /rhev/data-center/
drwxr-xr-x. vdsm kvm system_u:object_r:default_t:s0   /rhev/data-center/

when I tried to change it manually using chcon, with the following command:

[root@pele ~]# chcon -t mnt_t /rhev/data-center/
[root@pele ~]# ls -lZd /rhev/data-center/

and it looks like the security context has changed correctly; however, I still cannot start vms, as I get the following AVC:


type=AVC msg=audit(1278941168.858:117904): avc:  denied  { read } for  pid=26194 comm="qemu-kvm" name="aaac4a9b-ae1f-4e4b-9c71-d25eb10bc83f" dev=dm-0 ino=913952 scontext=system_u:system_r:svirt_t:s0:c418,c999 tcontext=system_u:object_r:default_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1278941168.858:117904): arch=c000003e syscall=2 success=no exit=-13 a0=118f4c0 a1=84002 a2=0 a3=40 items=0 ppid=1 pid=26194 auid=4294967295 uid=36 gid=36 euid=36 suid=36 fsuid=36 egid=36 sgid=36 fsgid=36 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c418,c999 key=(null)
type=ANOM_PROMISCUOUS msg=audit(1278941168.924:117905): dev=vnet0 prom=0 old_prom=256 auid=4294967295 uid=36 gid=36 ses=4294967295
type=CRED_ACQ msg=audit(1278941172.439:117906): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_START msg=audit(1278941172.439:117907): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_END msg=audit(1278941172.439:117908): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_CMD msg=audit(1278941172.440:117909): user pid=26229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F7362696E2F697363736961646D202D6D2073657373696F6E202D52 terminal=? res=success'
type=CRED_ACQ msg=audit(1278941172.460:117910): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_START msg=audit(1278941172.460:117911): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_END msg=audit(1278941172.461:117912): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:session_close acct="root" exe="/usr/bin/sudo" hostname=pele.qa.lab.tlv.redhat.com addr=10.35.65.112 terminal=? res=success'
type=USER_CMD msg=audit(1278941172.461:117913): user pid=26230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='cwd="/" cmd=2F62696

Comment 21 Daniel Walsh 2010-07-12 21:33:09 UTC
I guess we need 

/rhev			-d	gen_context(system_u:object_r:mnt_t,s0)
/rhev(/[^/]*)?		-d	gen_context(system_u:object_r:mnt_t,s0)
/rhev/[^/]*/.*			<<none>>

Just like the labels of /mnt
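
(Once such specs are installed, the expected mapping can be checked with matchpathcon — a sketch, with a hypothetical path as the third argument:

# matchpathcon /rhev /rhev/data-center /rhev/data-center/foo

The first two should resolve to mnt_t, and the last should hit the <<none>> entry, meaning the label the mounted filesystem provides is left alone.)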

Comment 22 Miroslav Grepl 2010-07-14 14:25:36 UTC
Fixed in selinux-policy-3.7.19-32.el6.noarch

Comment 24 Haim 2010-08-01 14:23:34 UTC
It looks like the policy was fixed, and now all mounts have the correct label; however, I am still unable to start guests when SELinux is enforcing.

policy looks as follows: 

[root@infra-vdsa ~]# semanage fcontext -l |grep rhev
/rhev                                              directory          system_u:object_r:mnt_t:s0 
/rhev(/[^/]*)?                                     directory          system_u:object_r:mnt_t:s0 
/rhev/[^/]*/.*                                     all files          <<None>>


when I start the machine, I get the following permission error: 

07:54:20.914: error : qemudWaitForMonitor:2548 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b: Permission denied

permissions: 

[root@infra-vdsa ~]# ls -lZd /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b 
lrwxrwxrwx. vdsm kvm system_u:object_r:mnt_t:s0       /rhev/data-center/841af73a-d3bf-4bb8-9985-0603fdcf302e/88703353-1968-4875-bdc5-604582582f22/images/c8843acb-d2e0-4f62-9233-173f0261cf18/a6957d93-268c-4fd5-9a39-8e4115ad6c6b -> /dev/88703353-1968-4875-bdc5-604582582f22/a6957d93-268c-4fd5-9a39-8e4115ad6c6b

[root@infra-vdsa ~]# ls -lZd /dev/88703353-1968-4875-bdc5-604582582f22/
drwxr-xr-x. vdsm kvm system_u:object_r:device_t:s0    /dev/88703353-1968-4875-bdc5-604582582f22/

please feel free to contact me in case you want to inspect the machine and configuration. 
also note that I performed a reboot after setting SELinux to enforcing (for the re-labelling to take effect).

moving back to assigned.

Comment 25 Miroslav Grepl 2010-08-02 15:32:25 UTC
Any AVC messages?

Comment 28 Haim 2010-08-08 08:25:11 UTC
type=AVC msg=audit(1281255977.973:1864353): avc:  denied  { read } for  pid=26818 comm="qemu-kvm" name="1a5d692c-db3f-45cd-9f11-34be4fb86b6d" dev=dm-0 ino=261173 scontext=system_u:system_r:svirt_t:s0:c266,c992 tcontext=unconfined_u:object_r:mnt_t:s0 tclass=lnk_file
type=SYSCALL msg=audit(1281255977.973:1864353): arch=c000003e syscall=2 success=yes exit=9 a0=2565a10 a1=800 a2=0 a3=0 items=0 ppid=1 pid=26818 auid=0 uid=36 gid=36 euid=36 suid=36 fsuid=36 egid=36 sgid=36 fsgid=36 tty=(none) ses=3 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c266,c992 key=(null)

Comment 29 Haim 2010-08-08 08:35:40 UTC
Created attachment 437419 [details]
full audit.log

Comment 30 Miroslav Grepl 2010-08-09 07:45:58 UTC
Haim,
if you execute

# grep svirt_t audit.log | audit2allow -M mysvirt
# semodule -i mysvirt.pp

Does it work?
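
(Side note: the generated allow rules can be reviewed before installing by using the lowercase flag, which prints the module source to stdout instead of building a package — a sketch:

# grep svirt_t audit.log | audit2allow -m mysvirt
)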

Comment 31 Haim 2010-08-09 12:01:13 UTC
yes, it's working. need any logs?

Comment 32 Miroslav Grepl 2010-08-09 12:47:41 UTC
(In reply to comment #31)
> yes, it's working. need any logs?

Ok, thanks for testing. I am fixing it.

Comment 33 Miroslav Grepl 2010-08-10 18:14:05 UTC
Fixed in selinux-policy-3.7.19-37.el6.noarch

Comment 35 Haim 2010-08-13 14:29:59 UTC
verified. steps for verification: 

1) setenforce 1
2) service vdsmd restart (also restart libvirtd)
3) start new guest on host - successful. 

please note that I monitored /var/log/audit/audit.log
and didn't see any AVCs, nor anything in /var/log/messages.

fixed with the following versions:

selinux-policy-targeted-3.7.19-38.el6.noarch
libselinux-utils-2.0.94-1.el6.x86_64
selinux-policy-3.7.19-38.el6.noarch
libselinux-2.0.94-1.el6.x86_64
libselinux-debuginfo-2.0.94-1.el6.x86_64

2.6.32-59.1.el6.x86_64
libvirt-0.8.1-23.el6.x86_64
vdsm-4.9-12.2.x86_64
device-mapper-multipath-0.4.9-25.el6.x86_64
lvm2-2.02.72-4.el6.x86_64
qemu-kvm-0.12.1.2-2.109.el6.x86_64

Comment 36 Haim 2010-08-15 08:57:02 UTC
kept on testing, and it seems it doesn't work on an NFS mount point.

repro steps are quite simple:

1) work on NFS storage 
2) create new vm (guest machine)
3) setenforce 1
4) start (virsh create) 

11:46:42.886: info : qemuConnectMonitor:1617 : Failed to connect monitor for nfsvirt-rhel5
11:46:42.886: error : qemudWaitForMonitor:2548 : internal error process exited while connecting to monitor: qemu: could not open disk image /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/8fbff449-eeeb-478f-b40c-bc4001372902/images/83128bab-8f05-4278-937e-e6141c03bd6f/4ff266a4-7bf6-4008-ab42-2a54868c924b: Permission denied

------------------------------------------------

[root@silver-vdse ~]# ls -Z /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0   250afad0-bed7-4bce-8841-73906e1c3e14 -> /rhev/data-center/mnt/qanashead.qa.lab.tlv.redhat.com:_export_hateya_rhel6.0-data2/250afad0-bed7-4bce-8841-73906e1c3e14
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0   8fbff449-eeeb-478f-b40c-bc4001372902 -> /rhev/data-center/mnt/qanashead.qa.lab.tlv.redhat.com:_export_hateya_rhel6.0-data1/8fbff449-eeeb-478f-b40c-bc4001372902
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0   mastersd -> 8fbff449-eeeb-478f-b40c-bc4001372902                                                                                            
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0   tasks -> mastersd/master/tasks
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:mnt_t:s0   vms -> mastersd/master/vms

------------------------------------------------

[root@silver-vdse ~]# ls -Z /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/8fbff449-eeeb-478f-b40c-bc4001372902/
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0       dom_md
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0       images
drwxr-xr-x. vdsm kvm system_u:object_r:nfs_t:s0       master

Comment 37 Miroslav Grepl 2010-08-16 07:03:06 UTC
Could you execute

setsebool -P virt_use_nfs 1

Does the problem go away? If no, please switch to permissive mode and attach AVC messages which you are seeing.
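
(The current value can be checked first with getsebool — a sketch:

# getsebool virt_use_nfs
virt_use_nfs --> off
)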

Comment 38 Haim 2010-08-16 10:18:16 UTC
yes, it goes away, though I still see the following libvirt error:

13:19:59.558: warning : virDomainDiskDefForeachPath:7654 : Ignoring open failure on /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/250afad0-bed7-4bce-8841-73906e1c3e14/images/858b7445-3ecf-4b8b-ae2e-c8d8a0ba9541/dacc7c3f-967c-4d24-abb8-d3bc621f9c04: Permission denied

is it related?

Comment 39 Miroslav Grepl 2010-08-16 10:23:43 UTC
Are you seeing it also in permissive mode?

Comment 40 Daniel Berrangé 2010-08-16 10:36:59 UTC
> 13:19:59.558: warning : virDomainDiskDefForeachPath:7654 : Ignoring open failure on /rhev/data-center/dff3b690-519b-4c05-b790-0b52837f40c3/250afad0-bed7-4bce-8841-73906e1c3e14/images/858b7445-3ecf-4b8b-ae2e-c8d8a0ba9541/dacc7c3f-967c-4d24-abb8-d3bc621f9c04: Permission denied

VDSM creates its directories with non-root user/group ownership, so if you have root_squash NFS, libvirtd won't be able to open the path. This isn't a problem as long as VDSM has given QEMU itself the correct permissions to open the path.
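
(For reference, root squashing is controlled per export on the NFS server; a hypothetical /etc/exports entry with it enabled looks like:

/export/hateya/rhel6.0-data1 *(rw,sync,root_squash)

With root_squash, the root-owned libvirtd process is mapped to the anonymous NFS user on the server, which explains the open warning above, while the vdsm/kvm-owned QEMU process can still open the image as long as VDSM has set the ownership correctly.)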

Comment 42 Haim 2010-08-17 06:29:10 UTC
the iscsi part was fixed; waiting for a vdsm fix for the NFS part (described in bug 624432).

Comment 44 Haim 2010-09-07 13:45:02 UTC
this bug can move to verified, as bug 624432 was fixed by vdsm. I managed to run vms under both nfs and iscsi storage with selinux on (enforcing).

selinux-policy-3.7.19-54.el6.noarch
2.6.32-71.el6.x86_64
libvirt-0.8.1-27.el6.x86_64
vdsm-4.9-14.el6.x86_64
device-mapper-multipath-0.4.9-30.el6.x86_64
lvm2-2.02.72-8.el6.x86_64
qemu-kvm-0.12.1.2-2.113.el6.x86_64
iptables-1.4.7-3.el6.x86_64

Comment 45 releng-rhel@redhat.com 2010-11-10 21:34:34 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.