Bug 1180769 - Security context on image file gets reset
Summary: Security context on image file gets reset
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.3
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 912499 1218766
Blocks: 910270 1274917 1288337 1301891 1364088
 
Reported: 2015-01-09 22:46 UTC by Ben Woodard
Modified: 2016-11-03 17:49 UTC (History)
CC: 20 users

Fixed In Version: libguestfs-1.32.0-2.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 912499
Environment:
Last Closed: 2016-11-03 17:49:04 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2576 normal SHIPPED_LIVE Moderate: libguestfs and virt-p2v security, bug fix, and enhancement update 2016-11-03 12:06:51 UTC

Description Ben Woodard 2015-01-09 22:46:26 UTC
+++ This bug was initially created as a clone of Bug #912499 +++

I've been running into a strange problem on F18 where guests could not write to their virtual storage. After several days of debugging, and finally remembering to turn off dontaudit rules with semodule -DB, I realized that SELinux was denying writes to the image file:

type=SYSCALL msg=audit(1361213332.460:456): arch=c000003e syscall=296 success=yes exit=8257536 a0=e a1=7fd6ce61a930 a2=1d7 a3=0 items=0 ppid=1 pid=4257 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 ses=4294967295 tty=(none) comm="qemu-kvm" exe="/usr/bin/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c630,c868 key=(null)
type=AVC msg=audit(1361213332.460:456): avc:  denied  { write } for  pid=4257 comm="qemu-kvm" path="/var/lib/libvirt/images/foo.img" dev="dm-1" ino=395221 scontext=system_u:system_r:svirt_t:s0:c630,c868 tcontext=system_u:object_r:virt_content_t:s0 tclass=file
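For readers less used to audit records, the AVC line above can be decoded mechanically. A minimal sketch (parse_avc is a made-up helper, not part of auditd; it just splits the standard key=value fields):

```python
import re

def parse_avc(line):
    """Split an auditd AVC record into its key=value fields."""
    return dict(re.findall(r"(\w+)=(\S+)", line))

avc = ('type=AVC msg=audit(1361213332.460:456): avc:  denied  { write } for  '
       'pid=4257 comm="qemu-kvm" path="/var/lib/libvirt/images/foo.img" dev="dm-1" '
       'ino=395221 scontext=system_u:system_r:svirt_t:s0:c630,c868 '
       'tcontext=system_u:object_r:virt_content_t:s0 tclass=file')

fields = parse_avc(avc)
# scontext: the confined qemu process (svirt_t plus per-guest MCS categories).
print(fields["scontext"])
# tcontext: the image file carries virt_content_t -- the "guest not running"
# label -- which is why the write is denied.
print(fields["tcontext"])
```

The mismatch is visible immediately: the process has per-guest MCS categories, but the file has been reset to the generic virt_content_t type.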

Turns out the context on the image file was virt_content_t:

-rw-------. qemu qemu system_u:object_r:virt_content_t:s0 /var/lib/libvirt/images/foo.img

Enabling libvirt verbose logging, I found that the security context was indeed getting set properly:

Feb 18 13:25:53 ld93 libvirtd[7897]: 2013-02-18 19:25:53.014+0000: 7902: info : virSecuritySELinuxSetFileconHelper:870 : Setting SELinux context on '/var/lib/libvirt/images/foo.img' to 'system_u:object_r:svirt_image_t:s0:c296,c808'

But then a few seconds later:

Feb 18 13:26:03 ld93 libvirtd[7897]: 2013-02-18 19:26:03.211+0000: 7898: info : virSecuritySELinuxSetFileconHelper:870 : Setting SELinux context on '/var/lib/libvirt/images/foo.img' to 'system_u:object_r:virt_content_t:s0'

So something was resetting the file.  I noticed that in virt-manager immediately after I brought up the guest another entry appeared in the list named guestfs-25zbexg649zpe6bz.  Armed with the knowledge that the context on an image file gets reset to the default when the guest is not running, I thought that perhaps somehow the context on the image file was getting reset when this guestfs-* VM exited.  I did 'yum erase libguestfs', restarted everything and tried to bring up another guest.  The security context stuck around this time.

I'm happy to provide any additional information you request, but at this moment I'm not at all sure what else I can provide.  Before I removed it, I had:

1:libguestfs-tools-1.20.1-3.fc18.x86_64
1:libguestfs-tools-c-1.20.1-3.fc18.x86_64
1:python-libguestfs-1.20.1-3.fc18.x86_64
guestfs-browser-0.2.2-1.fc18.x86_64
1:libguestfs-1.20.1-3.fc18.x86_64

as well as:

libvirt-0.10.2.3-1.fc18.x86_64
virt-manager-0.9.4-4.fc18.noarch

--- Additional comment from Richard W.M. Jones on 2013-02-18 15:17:28 EST ---

One observation: Only the 'python-libguestfs' package needs to
be blocked to stop virt-manager from using libguestfs.

One test you could try in order to prove whether or not
this is caused by libvirt relabelling via libguestfs is:

(1) Install /usr/bin/virt-df; this should not pull in
python-libguestfs.

(2) Run the following command on a running guest.  It
shouldn't disturb the guest (although if it triggers the
bug then it would label the disk which would definitely
be disturbing the guest):

  virt-df -d NameOfTheGuest

--- Additional comment from Jason Tibbitts on 2013-02-18 15:34:37 EST ---

OK, installed /usr/bin/virt-df, then brought up a guest.  The security context was fine and it could write:

[root@ld93 ~]# ls -lZ /var/lib/libvirt/images/foo.img
-rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c45,c503 /var/lib/libvirt/images/foo.img

Then I ran virt-df -d foo:

[root@ld93 ~]# virt-df -d foo
libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: 2013-02-18 20:32:23.664+0000: 1819: debug : virFileClose:72 : Closed fd 26
2013-02-18 20:32:23.664+0000: 1819: debug : virFileClose:72 : Closed fd 31
2013-02-18 20:32:23.665+0000: 1819: debug : virFileClose:72 : Closed fd 3
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/cpuset/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/memory/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/devices/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/freezer/libvirt/qemu/guestfs-4ftwxteo4ol [code=1 domain=10]

After this, the security context was reset:

[root@ld93 ~]# ls -lZ /var/lib/libvirt/images/foo.img
-rw-------. qemu qemu system_u:object_r:virt_content_t:s0 /var/lib/libvirt/images/foo.img

and writes to /dev/vda in the guest fail.

--- Additional comment from Richard W.M. Jones on 2013-02-18 15:43:52 EST ---

Dave: I'm pretty sure this is actually a libvirt bug (possibly an RFE).

--- Additional comment from Jason Tibbitts on 2013-02-18 16:59:28 EST ---

Just wanted to note that this actually prevents me from installing any guests in the default setup.  I just brought up an F18 machine and installed the Virtualization group.  To be honest I'm not sure how it works for anyone at all, unless they're disabling SELinux or installing a package set that somehow wouldn't pull in python-libguestfs.

--- Additional comment from Dave Allan on 2013-02-18 22:27:04 EST ---

OK, I can reproduce this behavior just by doing a purely default install with virt-manager; indeed, it did not reproduce until I installed virt-df.

--- Additional comment from Richard W.M. Jones on 2013-02-19 10:13:52 EST ---

I'll note that the workaround for this is to do:

  export LIBGUESTFS_ATTACH_METHOD=appliance

which goes back to the old (pre-F18) method of direct-launching qemu
instead of using libvirt.

--- Additional comment from Daniel Berrange on 2013-02-19 10:22:11 EST ---

If we need to run 2 VMs at once all accessing the same disk, then AFAICT, the only way to make it work from a sVirt POV is to ensure that libguestfs uses the same seclabel as the running guest. If different MCS labels are used, sVirt is always going to block either libguestfs or the real VM.

So basically look at the running guest for:

  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c24,c151</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c24,c151</imagelabel>
  </seclabel>

And then change 'dynamic' to 'static', 'relabel' to 'no' and remove the <imagelabel> . So you get

  <seclabel type='static' model='selinux' relabel='no'>
    <label>system_u:system_r:svirt_t:s0:c24,c151</label>
  </seclabel>

The remaining problem is that if the original guest shuts down while libguestfs is running, the libguestfs VM will get its access to the disks revoked. There's not really anything we can do about that, other than to stop trying to run 2 VMs using the same disks.
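The rewrite Dan describes above can be sketched as a small XML transformation. This is only an illustration of the element edit (pin_seclabel is an invented name, and a real domain document has many more elements), not the code libguestfs actually ended up using:

```python
import xml.etree.ElementTree as ET

def pin_seclabel(domain_xml):
    """Turn a dynamic sVirt <seclabel> into a static, non-relabelling one,
    reusing the running guest's process label so both VMs share MCS categories."""
    root = ET.fromstring(domain_xml)
    sec = root.find("seclabel")
    sec.set("type", "static")
    sec.set("relabel", "no")
    # Drop <imagelabel>: only the process <label> is carried over.
    img = sec.find("imagelabel")
    if img is not None:
        sec.remove(img)
    return ET.tostring(root, encoding="unicode")

guest = """<domain>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c24,c151</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c24,c151</imagelabel>
  </seclabel>
</domain>"""

print(pin_seclabel(guest))
```

As the following comments show, this exact transformation turned out to be insufficient on its own (console sockets still need relabelling), but it captures the intent: keep the guest's MCS categories, stop libvirt from relabelling on exit.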

--- Additional comment from Richard W.M. Jones on 2013-02-27 11:04:04 EST ---

(In reply to comment #7)
> If we need to run 2 VMs at once all accessing the same disk, then AFAICT,
> the only way to make it work from a sVirt POV is to ensure that libguestfs
> uses the same seclabel as the running guest. If different MCS labels are
> used, sVirt is always going to block either libguestfs or the real VM.
> 
> So basically look at the running guest for:
> 
>   <seclabel type='dynamic' model='selinux' relabel='yes'>
>     <label>system_u:system_r:svirt_t:s0:c24,c151</label>
>     <imagelabel>system_u:object_r:svirt_image_t:s0:c24,c151</imagelabel>
>   </seclabel>
> 
> And then change 'dynamic' to 'static', 'relabel' to 'no' and remove the
> <imagelabel> . So you get
> 
>   <seclabel type='static' model='selinux' relabel='no'>
>     <label>system_u:system_r:svirt_t:s0:c24,c151</label>
>   </seclabel>

I believe this doesn't work.  As outlined above, it won't work because
libvirt will not label the console sockets.  I got this error:

libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-system-x86_64: -chardev socket,id=charserial0,path=/home/rjones/d/libguestfs/tmp/libguestfsStzvbZ/console.sock: Failed to connect to socket: Permission denied

I fixed that by changing the global seclabel to:

 <seclabel type='static' model='selinux' relabel='yes'>
   <label>system_u:system_r:svirt_t:s0:c24,c151</label>
 </seclabel>

Of course now the problem is that it's relabelling the disks, so to get
around that I changed the disk definitions so each one had a local
<seclabel relabel="no"/> as follows:

 <disk device="disk" type="file">
   <source file="/home/rjones/d/libguestfs/tmp/libguestfsRtyvtz/snapshot2">
      <seclabel relabel="no"/>
   </source>
   <target dev="sda" bus="scsi"/>
   <driver name="qemu" type="qcow2"/>
   <address type="drive" controller="0" bus="0" target="0" unit="0"/>
 </disk>

However libvirt still relabels the disks from
system_u:object_r:svirt_image_t:s0:c678,c742 to
system_u:object_r:virt_content_t:s0.

This is possibly a bug in libvirt itself or in the documentation
of libvirt (http://libvirt.org/formatdomain.html#elementsDisks ).

Dave Allan suggested using <shareable/>, although the <seclabel> above
seems closer to my intention.

(Note I'm using libvirt 1.0.2 for testing).

--- Additional comment from Richard W.M. Jones on 2013-02-27 11:08:32 EST ---

<shareable/> doesn't stop relabelling of the disks.

--- Additional comment from Richard W.M. Jones on 2013-02-27 13:06:11 EST ---

Dan points out on the libvirt mailing list that the syntax
should be:

 <disk device="disk" type="file">
   <source file="/home/rjones/d/libguestfs/tmp/libguestfsRtyvtz/snapshot2">
      <seclabel model="selinux" relabel="no"/>
 ...

and indeed that causes libvirt not to relabel the disk.
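That per-disk override can be sketched the same way. A hypothetical helper (disable_disk_relabel is not a real libvirt API) that inserts the element, including the model attribute whose absence caused the earlier attempt to be silently ignored:

```python
import xml.etree.ElementTree as ET

def disable_disk_relabel(disk_xml):
    """Insert <seclabel model='selinux' relabel='no'/> under each <source>:
    the per-disk override that tells libvirt to leave this disk's label alone."""
    disk = ET.fromstring(disk_xml)
    for src in disk.iter("source"):
        ET.SubElement(src, "seclabel", {"model": "selinux", "relabel": "no"})
    return ET.tostring(disk, encoding="unicode")

# Illustrative disk definition modeled on the ones in this bug.
disk = """<disk device="disk" type="file">
  <source file="/var/lib/libvirt/images/foo.img"/>
  <target dev="sda" bus="scsi"/>
  <driver name="qemu" type="qcow2"/>
</disk>"""

print(disable_disk_relabel(disk))
```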

HOWEVER that's not the end of the story.  The problem now
is that libvirt doesn't relabel the qcow2 overlay, so qemu
can't access it.  What we really want is for libvirt to
relabel the overlay but not the backing disk.

The error is:

libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/home/rjones/d/libguestfs/tmp/libguestfsK3I2RN/snapshot2,if=none,id=drive-scsi0-0-0-0,format=qcow2: could not open disk image /home/rjones/d/libguestfs/tmp/libguestfsK3I2RN/snapshot2: Permission denied
 [code=1 domain=10]

I guess I could try to do the labelling from libguestfs (since
libguestfs creates the overlay).

Another solution that would work would be for libvirt to support
the snapshot=on parameter.  The whole reason we're creating
overlays manually here is because libvirt doesn't support that
obvious feature of qemu.

--- Additional comment from Richard W.M. Jones on 2013-02-28 05:58:25 EST ---

Final part of patch set posted:

https://www.redhat.com/archives/libguestfs/2013-February/thread.html#00122

--- Additional comment from Richard W.M. Jones on 2013-02-28 11:09:02 EST ---

Second version of patch series posted:

https://www.redhat.com/archives/libguestfs/2013-February/thread.html#00152

--- Additional comment from Richard W.M. Jones on 2013-03-01 13:17:36 EST ---

These patches are upstream.  The actual number of commits required to
fix this is rather scary:

(in reverse order)

https://github.com/libguestfs/libguestfs/commit/e78a2c5df3c4ec79e22e03ee4994958537f2e8d8
https://github.com/libguestfs/libguestfs/commit/26df366d3bf712a84337c2402f41506f2be6f610
https://github.com/libguestfs/libguestfs/commit/b9ee8baa49afbf8b6d80a42f3a309b660c7b32a5
https://github.com/libguestfs/libguestfs/commit/617eb88c5e66247894fde2aae11bd102889eb85c
https://github.com/libguestfs/libguestfs/commit/a6a703253be9e9c590a49a149c0170f2e46a1eb2
https://github.com/libguestfs/libguestfs/commit/3f1e7f1078ac40a6736b7721cc248f8ed0614f48
https://github.com/libguestfs/libguestfs/commit/93feaa4ae83b72864e7c10e9a388219ad9960123
https://github.com/libguestfs/libguestfs/commit/1ea7752e95a90aa8016d85489c7460b881fc59b0
https://github.com/libguestfs/libguestfs/commit/b6cbd980fb2fe8e43de9e716769cba63cd8d721b
https://github.com/libguestfs/libguestfs/commit/5ff3845d280515ab623d22666c3f5013f095d32a
https://github.com/libguestfs/libguestfs/commit/fe939cf842949f0eda0b6c69cad8d2d6b5b2c3fd
https://github.com/libguestfs/libguestfs/commit/6e3aab2f0c48280e746e90050abf25947159e294
https://github.com/libguestfs/libguestfs/commit/34e77af1bf42a132589901164f29bd992b37589e
https://github.com/libguestfs/libguestfs/commit/76266be549c521e3767a94c07e9ae616826a2568
https://github.com/libguestfs/libguestfs/commit/556e109765d7b6808045965a1eefcb434294f151
https://github.com/libguestfs/libguestfs/commit/4a6c8021b599952b991725043bac5c722635b3f6

As a result, I doubt that a backport to Fedora 18 / libguestfs 1.20
is going to be possible at this time.  This is Fedora 19 material.

Note that the following workaround is available for Fedora 18
users who encounter this problem:

  export LIBGUESTFS_ATTACH_METHOD=appliance

--- Additional comment from Robert Brown on 2013-05-12 04:14:54 EDT ---

I'm using Fedora 18.

This bug crashes virt-manager when it attempts to call the code to perform an inspection. In my case it happens regardless of setting LIBGUESTFS_ATTACH_METHOD.

libguestfs is also used by OpenStack nova, specifically in /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/guestfs.py. Same scenario: new instances will crash the nova compute service unless you manually move the inspection code out of the way.

It's not obvious at all, and if this is not going to be fixed in F18 I would recommend considering patches for both virt-manager and nova to stop them from tripping the bug by asking to inspect an image.

--- Additional comment from Richard W.M. Jones on 2013-05-12 04:49:11 EDT ---

(In reply to comment #14)
> I'm using Fedora 18.
> 
> This bug crashes virt-manager when it attempts to call the code to perform
> an inspection. In my case it happens regardless of setting
> LIBGUESTFS_ATTACH_METHOD.

You must not be setting LIBGUESTFS_ATTACH_METHOD in the right
place, or else you're seeing a different bug.

With LIBGUESTFS_ATTACH_METHOD=appliance, libvirt, SELinux & sVirt are
not involved at all and you would not see this bug.

Also, even if you hit the bug, virt-manager won't crash, it'll just
fail to inspect the guest.

Comment 1 Ben Woodard 2015-01-09 22:48:12 UTC
Jim Foraker at LLNL is having this exact problem on RHEL7.

Comment 2 Ben Woodard 2015-01-09 22:51:47 UTC
 libguestfs-1.22.6-22.el7.x86_64

Comment 4 Hu Zhang 2015-01-13 11:18:56 UTC
Reproduced this case.

Test steps:
1. # ls -lZ rhel7.img
   -rw-------. root root system_u:object_r:admin_home_t:s0 rhel7.img
2. # guestfish --rw -a rhel7.img
   ><fs> get-backend
   libvirt
   ><fs> run
   ><fs> mount /dev/sda1 /
3. # ls -lZ rhel7.img
   -rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c230,c866 rhel7.img
4. # virt-df -a rhel7.img
   Filesystem                           1K-blocks       Used  Available  Use%
   rhel7.img:/dev/sda1                     508588      95524     413064   19%
   rhel7.img:/dev/rhel/root               7022592    2984968    4037624   43%
Comments: virt-df can get the filesystem of the image.
5. # ls -lZ rhel7.img
   -rw-------. qemu qemu system_u:object_r:virt_content_t:s0 rhel7.img
Comments: However, libvirt relabels the disk from
system_u:object_r:svirt_image_t:s0:c230,c866 to
system_u:object_r:virt_content_t:s0.
6. ><fs> touch /test
Comments: the file test is created successfully.
7. ><fs> umount /
8. ><fs> mount /dev/sda1 /
   libguestfs: error: mount: /dev/sda1 on / (options: ''): mount: /dev/sda1: can't read superblock
9. ls -lZ /tmp/libguestfsXINrGS/overlay1
   -rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0 overlay1

In step 5, after running "virt-df -a rhel7.img", libvirt relabels the disk from
system_u:object_r:svirt_image_t:s0:c230,c866 to system_u:object_r:virt_content_t:s0.
In step 6, the problem appears: before the umount we can still write to the image, but mounting /dev/sda1 again after unmounting it fails. Is this caused by the change of the label?
In step 9, the label on overlay1 does not change.

Comment 5 Hu Zhang 2015-01-13 11:53:46 UTC
(In reply to Hu Zhang from comment #4)
> Reproduced this case.
> 
> Test steps:
> 1. # ls -lZ rhel7.img
>    -rw-------. root root system_u:object_r:admin_home_t:s0 rhel7.img
> 2. # guestfish --rw -a rhel7.img
>    ><fs> get-backend
>    libvirt
>    ><fs> run
>    ><fs> mount /dev/sda1 /
> 3. # ls -lZ rhel7.img
>    -rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c230,c866
> rhel7.img
> 4. # virt-df -a rhel7.img
>    Filesystem                           1K-blocks       Used  Available  Use%
>    rhel7.img:/dev/sda1                     508588      95524     413064   19%
>    rhel7.img:/dev/rhel/root               7022592    2984968    4037624   43%
> Comments: virt-df can get the filesystem of the image.
> 5. # ls -lZ rhel7.img
>    -rw-------. qemu qemu system_u:object_r:virt_content_t:s0 rhel7.img
> Comments: However libvirt relabels the disks from
> system_u:object_r:svirt_image_t:s0:c678,c742 to
> system_u:object_r:virt_content_t:s0.
> 6. ><fs> touch /test
> Comments: create the file test successfully.
> 7. ><fs> umount /
> 8. ><fs> mount /dev/sda1 /
>    libguestfs: error: mount: /dev/sda1 on / (options: ''): mount: /dev/sda1:
> can't read superblock
> 9. ls -lZ /tmp/libguestfsXINrGS/overlay1
>    -rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0 overlay1
> 
> In step5, after run "virt-df -a rhel7.img", the libvirt relabels the disks
> from
> system_u:object_r:svirt_image_t:s0:c678,c742 to
> system_u:object_r:virt_content_t:s0.
> In step6, The problem is here. before umount, we can write to the image.
> However, it fails to mount /dev/sda1 again after unmount it.
> In step9, the overlay1's label does not change.

The symptom is not exactly the same as in the description. What symptom did you encounter?

Comment 6 Ben Woodard 2015-01-13 15:32:40 UTC
My personal notes are kind of sketchy: the initial discussion was over the phone, where we mapped out a plan of attack to start getting a handle on it; then, within about 10 minutes, he came back to me with:

14:20 foraker: Hrm, I think this is what I'm seeing: https://bugzilla.redhat.com/show_bug.cgi?id=912499
14:28 foraker: The more concise explanation: https://lists.fedoraproject.org/pipermail/virt/2013-February/003592.html
14:28 foraker: And indeed, setting LIBGUESTFS_ATTACH_METHOD=appliance makes the problem go away.

According to what I remember, when he ran virt-manager he would briefly see a libguestfs entry flicker in the virt-manager list, and right after it went away, all the running VMs would start reporting errors indicating that they couldn't read their disk images.

Comment 9 Richard W.M. Jones 2015-09-23 14:54:49 UTC
The rebase in RHEL 7.3 should fix this one.

Comment 12 Xianghua Chen 2016-07-08 10:16:00 UTC
Hi Rich,
I still have some questions about this bug; could you please check them for me? These are my verification steps:
1. The original selinux context:
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. root root unconfined_u:object_r:user_home_t:s0 RHEL-Server-7.2-64-hvm.raw

2. Launch the guest image and Ctrl+z it, then check the selinux context.
# guestfish -a RHEL-Server-7.2-64-hvm.raw
><fs> run
><fs> mount /dev/sda1 /
><fs> ^Z
[1]+  Stopped                 guestfish -a RHEL-Server-7.2-64-hvm.raw
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c9,c86 RHEL-Server-7.2-64-hvm.raw

3. Use virt-df and check the selinux context:
# virt-df -a RHEL-Server-7.2-64-hvm.raw
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. qemu qemu system_u:object_r:virt_content_t:s0 RHEL-Server-7.2-64-hvm.raw

4. # fg
guestfish -a RHEL-Server-7.2-64-hvm.raw
><fs> touch /test
><fs> umount /
><fs> mount /dev/sda1 /
><fs> touch /test1
><fs> ^Z

5. Check overlay1
# ls -lZ /tmp/libguestfsHTgxL3/overlay1 
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0 /tmp/libguestfsHTgxL3/overlay1

My questions are:
1. In step 3, I learned from the comments and patches above that you told libvirt not to relabel the image, but here it has still relabelled the image from "svirt_image_t:s0:c9,c86" to "virt_content_t:s0". Is this a problem, or am I mistaken?
2. If it is fine for libvirt to relabel the context, then what was done to fix the errors that previously occurred in step 4 (failing to touch /test, or failing to mount again)?

Comment 13 Richard W.M. Jones 2016-07-08 10:28:22 UTC
Since you used the virt-df -a option, we didn't know that libvirt was
already using the image, so we didn't tell libvirt not to relabel it.

If you had used the -d option instead then the image shouldn't have
been relabelled.

The relevant function used with the -d option only is:

https://github.com/libguestfs/libguestfs/blob/master/src/libvirt-domain.c#L180-L319

Nothing in this bug tries to fix the -a option case.

Comment 14 Xianghua Chen 2016-07-12 03:45:27 UTC
Thanks for the reply, Rich.

Verified with the packages:
libguestfs-1.32.5-10.el7.x86_64

Verify steps:
1. Start a RHEL guest image:
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 15    rhel7.2-20160711               running

2. Check the selinux context:
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c784,c1001 RHEL-Server-7.2-64-hvm.raw
3.
# virt-df -d rhel7.2-20160711
Filesystem                           1K-blocks       Used  Available  Use%
rhel7.2-20160711:/dev/sda1              508588      93608     414980   19%
rhel7.2-20160711:/dev/rhel_dhcp-10-28/root
                                       6981632    2150148    4831484   31%
4. Check the selinux context again:
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c784,c1001 RHEL-Server-7.2-64-hvm.raw

***Looks good; the SELinux context has not been modified by libvirt.

5. Start guest image using guestfish and check the selinux context of overlay1.
# guestfish -d rhel7.2-20160711 --ro
><fs> run
><fs> mount /dev/sda1 /
><fs> ^Z
[1]+  Stopped                 guestfish -d rhel7.2-20160711 --ro
# ls -lZ RHEL-Server-7.2-64-hvm.raw
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c784,c1001 RHEL-Server-7.2-64-hvm.raw
# ls -lZ /tmp/libguestfsb4Y6Xs/overlay1
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c784,c1001 /tmp/libguestfsb4Y6Xs/overlay1

***The SELinux contexts of the guest image and overlay1 are both correct.


So verified.

Comment 16 errata-xmlrpc 2016-11-03 17:49:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2576.html

