Description of problem:
If one creates a volume in a pool that is an existing VG, the SELinux context is not set correctly.

Version-Release number of selected component (if applicable):

How reproducible:
always

Steps to Reproduce:
1. Open virt-manager.
2. Have a storage pool that is an existing Volume Group with free Physical Extents (VG_x60_internal in my case).
3. Create a new volume (e.g. KVM_test) in that pool.

Actual results:
# ls -lZ /dev/mapper/VG_x60_internal-KVM_test
brw------- root root system_u:object_r:fixed_disk_device_t:s0 /dev/mapper/VG_x60_internal-KVM_test

Expected results:
system_u:object_r:virt_image_t:s0

Additional info:
I do understand that this is a difficult case, as my existing VG also has LVs for / and swap, and we definitely do not want to change the context of those. But new volumes created in the GUI should be labelled with the correct context.
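The mismatch above is easy to check by script. A minimal sketch; the helper name is hypothetical, and the hard-coded context string is the one from the `ls -lZ` output in this report:

```shell
#!/bin/sh
# Sketch: pull the SELinux type (the third field) out of a
# user:role:type:level context string, as printed by `ls -lZ`.
selinux_type() {
    printf '%s\n' "$1" | awk -F: '{print $3}'
}

# Context reported for the freshly created LV in this bug:
actual=$(selinux_type "system_u:object_r:fixed_disk_device_t:s0")
echo "got $actual, wanted virt_image_t"
# -> got fixed_disk_device_t, wanted virt_image_t
```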
Oops, forgot the versions:

libvirt-0.4.6-3.fc10.x86_64
virt-manager-0.6.0-3.fc10.x86_64
I have a different scenario with the same result. In rawhide pre-F11, Filesystem Directory pools, whether the directory already exists or is created by virt-manager, are not given "virt_image_t". Startup of VMs using image files created in the pool then fails due to AVC denials.

libvirt-0.6.0-4.fc11.x86_64
libvirt-python-0.6.0-4.fc11.x86_64
python-virtinst-0.400.1-1.fc11.noarch
virt-manager-0.6.1-2.fc11.x86_64

Should this be a separate BZ?
I think this is similar to bug #491245 and should be fixed in rawhide. Please re-open if not.

*** This bug has been marked as a duplicate of bug 491245 ***
It would seem this is not fixed.

qemu-0.10-8.fc11.x86_64
libvirt-0.6.2-2.fc11.x86_64
virt-manager-0.7.0-4.fc11.x86_64

Created a volume as per the initial description:

# ls -lZ /dev/mapper/vg_bcblade02-KVM_test
brw-------. root root system_u:object_r:fixed_disk_device_t:s0 /dev/mapper/vg_bcblade02-KVM_test
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Creating LVM volumes with a particular label, while nice, shouldn't really impact running of guests using the volume. Whenever you start a KVM guest in Fedora 11, libvirt will automatically set the correct label on all of its disks. So what actual problem is this lack of labelling causing you?
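Under the behaviour described in this comment (libvirt relabels a guest's disks itself at start), only the post-start label matters, so a check like the following would be the relevant one. A sketch; the function name is mine, and the only type treated as acceptable is the "virt_image_t" expected earlier in this thread:

```shell
#!/bin/sh
# Sketch: decide whether a disk's SELinux type is already usable by a
# guest. Per the comment above, a fixed_disk_device_t answer before
# guest start should be harmless, since libvirt relabels at start.
label_ok_for_guest() {
    case "$1" in
        virt_image_t) echo yes ;;
        *)            echo no ;;
    esac
}

label_ok_for_guest "fixed_disk_device_t"   # -> no (relabeled at guest start)
label_ok_for_guest "virt_image_t"          # -> yes
```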
My bad, I should have CLOSED CURRENTRELEASE this. With F11

qemu-kvm-0.10.5-3.fc11.x86_64
libvirt-0.6.2-13.fc11.x86_64
virt-manager-0.7.0-5.fc11.x86_64

I can now create LVs in the virt-manager GUI and use them. Previously the wrong labelling was preventing use of the freshly created LVs.