Description of problem:
Already running VMs fail to live-migrate to hosts where SELinux has been enabled after previously being disabled. Stopping all VMs, putting all hosts into maintenance mode, and enabling SELinux on all hosts results in VMs failing to start.

Version-Release number of selected component (if applicable):
vdsm-4.16.26-1.el7ev

How reproducible:
Always

Steps to Reproduce:
1. Build a RHEV environment with SELinux disabled on all hosts (provision the hosts with SELinux disabled, then add them to a new cluster/storage domain).
2. Create, provision, and run some VMs.
3. Enable SELinux on some or all of the hosts, relabel (touch /.autorelabel followed by a reboot), and activate the host(s) again.

Actual results:
VMs already running on other hosts refuse to live-migrate to a host where SELinux has been enabled. Stopped VMs fail to start on hosts where SELinux has been enabled:

Oct 5 20:19:32 hostname journal: vdsm vm.Vm ERROR vmId=`6a905e8d-9fc6-4f8c-b726-120d6478efee`::The vm start process failed
[...]
libvirtError: internal error: process exited while connecting to monitor: 2015-10-05T19:19:32.089684Z qemu-kvm: -drive file=/rhev/data-center/aaba8d13-dfbf-4d54-978c-483846e4549f/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images/3e530cae-d3f7-4680-a4f8-336a9c9f45d9/4d72e9a4-cd8b-4c93-9888-8eead43f5f84,if=none,id=drive-virtio-disk0,format=raw,serial=3e530cae-d3f7-4680-a4f8-336a9c9f45d9,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/aaba8d13-dfbf-4d54-978c-483846e4549f/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images/3e530cae-d3f7-4680-a4f8-336a9c9f45d9/4d72e9a4-cd8b-4c93-9888-8eead43f5f84: Could not open '/rhev/data-center/aaba8d13-dfbf-4d54-978c-483846e4549f/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images/3e530cae-d3f7-4680-a4f8-336a9c9f45d9/4d72e9a4-cd8b-4c93-9888-8eead43f5f84': Permission denied

Expected results:
VMs can successfully live-migrate and start on hosts where SELinux has been enabled after having been disabled.

Additional info:
The contents of /rhev show that most entries there are labeled with the "unlabeled_t" type; see the attachment with the output of "ls -lRZ /rhev" from the SPM host.

The current file context mapping definitions ...

# semanage fcontext -l | grep ^/rhev
/rhev              directory   system_u:object_r:mnt_t:s0
/rhev(/[^/]*)?     directory   system_u:object_r:mnt_t:s0
/rhev/[^/]*/.*     all files   <<None>>   <---
...
... don't fix the labelling for the contents of /rhev when running "restorecon -Rv /rhev" on the SPM host:

[root@hostname ~]# restorecon -Rv /rhev
restorecon: Warning no default label for /rhev/data-center/mnt
restorecon: Warning no default label for /rhev/data-center/mnt/blockSD
restorecon: Warning no default label for /rhev/data-center/mnt/blockSD/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d
restorecon: Warning no default label for /rhev/data-center/mnt/blockSD/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images
restorecon: Warning no default label for /rhev/data-center/mnt/blockSD/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images/b35a505a-e82b-4ad5-8ace-b4aafb322080
restorecon: Warning no default label for /rhev/data-center/mnt/blockSD/ad150646-5a3a-49aa-b5ec-1ddf8ff78a3d/images/eca16e41-04b5-4d42-a403-8254a7701d37
[...]
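The warnings above follow directly from the semanage output: the /rhev/[^/]*/.* pattern maps to <<None>>, so restorecon has no default label to apply below the mount points. A minimal sketch of how a default mapping could be registered locally is shown below; the mnt_t type and the /rhev(/.*)? pattern are assumptions mirrored from the existing /rhev rules, not the actual fix (which was addressed in bug 1271573):

```shell
# SKETCH ONLY (assumed workaround, not the official fix).
# Requires root, an SELinux-enabled host, and the semanage tool
# (policycoreutils-python on RHEL 7).

# Register a default file context for everything under /rhev,
# reusing the mnt_t type already mapped to /rhev itself.
semanage fcontext -a -t mnt_t '/rhev(/.*)?'

# Re-apply labels recursively now that a default mapping exists;
# the "no default label" warnings should no longer appear.
restorecon -Rv /rhev

# Confirm the new mapping is listed.
semanage fcontext -l | grep '^/rhev'
```

Note that mnt_t alone may still not allow qemu-kvm to open the disk images (libvirt's sVirt confinement expects virt-accessible image labels), so this only illustrates the missing-mapping mechanism, not a verified resolution.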
*** This bug has been marked as a duplicate of bug 1271573 ***