Bug 1101534 - [RFE] provide additional audit information when SELinux policy forbids file access
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.0
Hardware: x86_64
OS: All
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Virtualization Maintenance
QA Contact: yafu
URL:
Whiteboard:
Depends On:
Blocks: TRACKER-bugs-affecting-libguestfs
 
Reported: 2014-05-27 12:46 UTC by Vadim Rutkovsky
Modified: 2020-02-11 12:58 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-11 12:58:49 UTC
Type: Feature Request
Target Upstream Version:



Description Vadim Rutkovsky 2014-05-27 12:46:12 UTC
Description of problem:
When mounting an image I get a "Permission denied" error, even though all the permissions seem to be set correctly and the "libguestfs-test-tool" test passes.

Version-Release number of selected component (if applicable):
libguestfs-1.22.6-22.el7.x86_64

Steps to Reproduce:
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -laZ /data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2
-rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 /data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2
[cloud-user@continuous-vrutkovs gnome-continuous]$ guestmount -o allow_root --pid-file /data/gnome-continuous/local/smoketest/mnt.guestmount-pid -a /data/gnome-continuous/local/smoketest/work-gnome-continuous-x86_64-runtime/testoverlay-gnome-continuous-x86_64-runtime.qcow2 --rw -m /dev/sda3 -m /dev/sda1:/boot /data/gnome-continuous/local/smoketest/mnt -v
libguestfs: command: run: \ -f checksum
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ x86_64
supermin helper [00000ms] whitelist = (not specified), host_cpu = x86_64, kernel = (null), initrd = (null), appliance = (null)
supermin helper [00000ms] inputs[0] = /usr/lib64/guestfs/supermin.d
checking modpath /lib/modules/3.10.0-121.el7.x86_64 is a directory
checking modpath /lib/modules/3.10.0-123.el7.x86_64 is a directory
picked vmlinuz-3.10.0-123.el7.x86_64
supermin helper [00000ms] finished creating kernel
supermin helper [00000ms] visiting /usr/lib64/guestfs/supermin.d
supermin helper [00000ms] visiting /usr/lib64/guestfs/supermin.d/base.img.gz
supermin helper [00000ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img.gz
supermin helper [00000ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles
supermin helper [00047ms] visiting /usr/lib64/guestfs/supermin.d/init.img
supermin helper [00047ms] visiting /usr/lib64/guestfs/supermin.d/udev-rules.img
supermin helper [00047ms] adding kernel modules
supermin helper [00094ms] finished creating appliance
libguestfs: checksum of existing appliance: 1ba22a24b52b7fd692faf8fbad23a7bebe4d9ed9fd1da6078b62149773761247
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -b /var/tmp/.guestfs-1000/root.12569
libguestfs: command: run: \ -o backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsJXCxjt/snapshot1
Formatting '/tmp/libguestfsJXCxjt/snapshot1', fmt=qcow2 size=4294967296 backing_file='/var/tmp/.guestfs-1000/root.12569' backing_fmt='raw' encryption=off cluster_size=65536 lazy_refcounts=off 
libguestfs: [15875ms] create libvirt XML
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit:]]+'
libguestfs: read_lpj_from_dmesg: calculated lpj=2000070
libguestfs: command: run: qemu-img
libguestfs: command: run: \ --help
libguestfs: which_parser: g->qemu_img_info_parser = 1
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /dev/fd/9
libguestfs: parse_json: qemu-img info JSON output:
{
    "virtual-size": 8589934592,
    "filename": "/dev/fd/9",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 200704,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "lazy-refcounts": false
        }
    },
    "backing-filename": "/data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2",
    "dirty-flag": false
}
libguestfs: libvirt XML:
<?xml version="1.0"?>
<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-9vzx36as6919spll</name>
  <memory unit="MiB">500</memory>
  <currentMemory unit="MiB">500</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc">
    <timer name="kvmclock" present="yes"/>
  </clock>
  <os>
    <type>hvm</type>
    <kernel>/var/tmp/.guestfs-1000/kernel.12569</kernel>
    <initrd>/var/tmp/.guestfs-1000/initrd.12569</initrd>
    <cmdline>panic=1 console=ttyS0 udevtimeout=600 no_timer_check lpj=2000070 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=screen</cmdline>
  </os>
  <on_reboot>destroy</on_reboot>
  <devices>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk device="disk" type="file">
      <source file="/data/gnome-continuous/local/smoketest/work-gnome-continuous-x86_64-runtime/testoverlay-gnome-continuous-x86_64-runtime.qcow2"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="writeback"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/tmp/libguestfsJXCxjt/snapshot1"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
      <shareable/>
    </disk>
    <serial type="unix">
      <source mode="connect" path="/tmp/libguestfsJXCxjt/console.sock"/>
      <target port="0"/>
    </serial>
    <channel type="unix">
      <source mode="connect" path="/tmp/libguestfsJXCxjt/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
  </devices>
  <qemu:commandline>
    <qemu:env name="TMPDIR" value="/var/tmp"/>
  </qemu:commandline>
</domain>
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /var/tmp/.guestfs-1000
libguestfs: drwxr-xr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root       root       system_u:object_r:tmp_t:s0       ..
libguestfs: -rwxr-xr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 checksum
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 initrd
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 initrd.12569
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 kernel
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 kernel.12569
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 root
libguestfs: -rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 root.12569
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsJXCxjt
libguestfs: drwxr-xr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root       root       system_u:object_r:tmp_t:s0       ..
libguestfs: srwxrwxr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 console.sock
libguestfs: srwxrwxr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 guestfsd.sock
libguestfs: -rw-r--r--. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 snapshot1
libguestfs: -rwxrwxr-x. cloud-user cloud-user unconfined_u:object_r:user_tmp_t:s0 umask-check
libguestfs: [15920ms] launch libvirt guest
libguestfs: error: could not create appliance through libvirt: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=/data/gnome-continuous/local/smoketest/work-gnome-continuous-x86_64-runtime/testoverlay-gnome-continuous-x86_64-runtime.qcow2,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=writeback: could not open disk image /data/gnome-continuous/local/smoketest/work-gnome-continuous-x86_64-runtime/testoverlay-gnome-continuous-x86_64-runtime.qcow2: Could not open backing file: Could not open '/data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2': Permission denied
 [code=1 domain=10]
libguestfs: closing guestfs handle 0x7f289957d270 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsJXCxjt

Comment 1 Richard W.M. Jones 2014-05-27 12:55:31 UTC
The error is:

  Could not open backing file: Could not open '/data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2': Permission denied

You've shown the permissions on this file, which look fine:

$ ls -laZ /data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2
-rw-r--r--. cloud-user cloud-user system_u:object_r:virt_content_t:s0 /data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2

You'll have to check all the parent directories too:

ls -ldZ /data/gnome-continuous/images/current
ls -ldZ /data/gnome-continuous/images
[etc]

Also (unfortunately) qemu likes to open disk images with O_RDWR
even if it is only going to read them, so you probably need to check
for write permission too, at least on the enclosing directory.
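The ls commands above can be run in one go with a short helper. This is a generic sketch, not a libguestfs tool, and it assumes GNU coreutils `ls` (for `-Z`):

```shell
# Print permissions and SELinux context for every directory leading
# up to a file. Argument: the image path that fails to open.
check_parents() {
    d=$(dirname "$1")
    while [ "$d" != "/" ]; do
        ls -ldZ "$d"
        d=$(dirname "$d")
    done
    ls -ldZ /
}
```

Usage: `check_parents /data/gnome-continuous/images/current/gnome-continuous-x86_64-runtime.qcow2`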

Comment 2 Vadim Rutkovsky 2014-05-27 13:17:00 UTC
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -ldZ /data/gnome-continuous/images/current
lrwxrwxrwx. cloud-user cloud-user unconfined_u:object_r:httpd_sys_content_t:s0 /data/gnome-continuous/images/current -> 20140526.13
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -ldZ /data/gnome-continuous/images
drwxrwxrwx. cloud-user cloud-user unconfined_u:object_r:httpd_sys_content_t:s0 /data/gnome-continuous/images
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -ldZ /data/gnome-continuous/
drwxrwxr-x. cloud-user cloud-user unconfined_u:object_r:httpd_sys_content_t:s0 /data/gnome-continuous/
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -ldZ /data
drwxr-xr-x. cloud-user cloud-user system_u:object_r:httpd_sys_content_t:s0 /data
[cloud-user@continuous-vrutkovs gnome-continuous]$ ls -ldZ /data/gnome-continuous/images/20140526.13
drwxrwxr-x. cloud-user cloud-user unconfined_u:object_r:httpd_sys_content_t:s0 /data/gnome-continuous/images/20140526.13

chmodding a+w on the directories didn't help. Is it possible that this happens because '/images/current' is a symlink?

Comment 3 Vadim Rutkovsky 2014-05-27 13:19:16 UTC
Oops, my fault, adding a proper SELinux label fixed it:
sudo chcon -v -R --type=virt_content_t /data/gnome-continuous/images/
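Worth noting (a standard SELinux admin recipe, not something from this bug): `chcon` changes do not survive a filesystem relabel. The persistent variant records a file-context rule first and then applies it:

```shell
# Persistent version of the chcon fix above. Requires root and the
# semanage tool (policycoreutils). The path pattern mirrors the
# directory used in this bug.
semanage fcontext -a -t virt_content_t '/data/gnome-continuous/images(/.*)?'
restorecon -R -v /data/gnome-continuous/images/
```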

Comment 4 Richard W.M. Jones 2014-05-27 13:26:33 UTC
(In reply to Vadim Rutkovsky from comment #3)
> Oops, my fault, adding a proper SELinux label fixed it:
> sudo chcon -v -R --type=virt_content_t /data/gnome-continuous/images/

Hmm, this is yet another catch with sVirt labelling.

Shouldn't libvirt be labelling this directory?

Comment 5 Vadim Rutkovsky 2014-05-27 13:40:56 UTC
>Shouldn't libvirt be labelling this directory?
FYI, I created the directory manually for gnome-continuous and ran the commands straight out of it, without updating any libvirtd config.

Comment 6 Richard W.M. Jones 2014-05-27 13:47:04 UTC
Libvirt labels everything that qemu will touch, in order for
sVirt (SELinux) to work.  However I don't know if it labels
parent directories of objects (or if it doesn't but should).
Let's see what the response is from the libvirt team.

Comment 7 Jiri Denemark 2014-05-28 08:23:51 UTC
Libvirt does not label parent directories (except for those it creates itself) and I think that's the right way. Default paths are labeled automatically by the default selinux-policy but labeling non-default paths is up to the user who decides to use such paths. The same applies across the whole system, if anyone wants to store, e.g., web pages in a non-default path, they have to label that path too.

Comment 8 Dave Allan 2014-05-28 14:51:51 UTC
I'm going to reopen this BZ as an RFE to provide additional audit information where possible and appropriate (i.e., no information leakage) to enable easier debugging of this situation.

Comment 9 Daniel Berrangé 2014-05-28 15:01:16 UTC
FYI, one idea was to try to involve the systemd journal. Every QEMU guest now has a transient systemd machine unit associated with it. IIUC, kernel AVC audit messages generated by a process should be associated with the systemd unit containing the process; IOW, the systemd machine unit holding QEMU should be associated with any AVCs QEMU causes. So libvirt could query journald to find out whether QEMU generated AVCs. This could be used to improve libvirt's error message to the user, or perhaps to add a libvirt public API which an app can use to extract the log message(s) and report them to the user along with the current plain error message.
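As a rough sketch of what such a journald query might look like from the shell (this is not an implemented libvirt feature; the unit name and the `avc:` filter are assumptions):

```shell
# Pull audit-transport journal records for a given systemd unit and
# keep only SELinux AVC lines. Illustrative only: libvirt does not do
# this today. The argument is whatever transient machine unit holds
# the QEMU process for the guest.
avcs_for_unit() {
    journalctl -q --unit "$1" _TRANSPORT=audit --no-pager 2>/dev/null |
        grep 'avc:' || true
}
```

Usage: `avcs_for_unit <machine-unit>.scope` would print any AVC denials attributed to that guest's QEMU process.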

Comment 11 Martin Kletzander 2015-02-17 09:37:36 UTC
(In reply to Daniel Berrange from comment #9)
When should this happen?  When the machine quits (with or without errors)?  I assume sd_journal_open() would help with that, and we would append the data to the domain's logfile or to the error message, right?

Comment 12 Daniel Berrangé 2015-02-17 09:44:45 UTC
If the machine quits with an error at startup, it would be desirable to try to augment the error message libvirt raises with AVCs (if SELinux is enforcing).  I wouldn't append them to the logfile though - that'd just be copying from the structured journal log to the unstructured qemu log, which seems like a backwards step to me. I think it is probably about time we added an API for reading the VM logs, e.g. virDomainOpenLog(virDomainPtr dom, unsigned int flags), with two flags VIR_DOMAIN_LOG_EMULATOR and VIR_DOMAIN_LOG_KERNEL (or _HYPERVISOR perhaps?) to switch between returning data from /var/log/libvirt/qemu/$GUEST.log and from the systemd journal.

Comment 16 Jaroslav Suchanek 2020-02-11 12:58:49 UTC
This bug was closed deferred as a result of bug triage.

Please reopen if you disagree and provide justification why this bug should
get enough priority. Most important would be information about impact on
customer or layered product. Please indicate requested target release.

