Bug 1218603 - libvirt must audit resource information about ivshmem
Summary: libvirt must audit resource information about ivshmem
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On: 1347049
Blocks: 1111101 1389013
 
Reported: 2015-05-05 11:40 UTC by Martin Kletzander
Modified: 2017-08-02 01:25 UTC
CC List: 16 users

Fixed In Version: libvirt-3.2.0-9.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1389013
Environment:
Last Closed: 2017-08-01 17:06:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1846 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2017-08-01 18:02:50 UTC

Description Martin Kletzander 2015-05-05 11:40:09 UTC
Description of problem:
Libvirt leaves creation of shm segment to qemu and does not provide auditing of shared memory resources.

Version-Release number of selected component (if applicable):
libvirt-1.2.15

How reproducible:
100%

Steps to Reproduce:
1. Make sure auditing is enabled (default for RHEL)
2. Create domain with Inter-VM Shared memory
3. Search for the audit message (e.g. ausearch -ts today -m VIRT_RESOURCE | grep '\bold-smem=')

Actual results:
No audit message found

Expected results:
Something along the lines of:

type=VIRT_RESOURCE msg=audit(1430818368.900:261): pid=594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=qemu resrc=mem reason=start vm="dummy" uuid=a19b2370-a7e8-4d65-82a1-53c0ccfe873c old-smem=0 new-smem=1048576 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
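For anyone scripting checks against such records, the payload inside msg='...' is a flat list of key=value pairs. A minimal Python sketch of pulling fields out of a line like the one above (the record text is abbreviated from the expected result; the parser itself is only an illustration, not part of libvirt or audit):

```python
import re

def parse_virt_resource(line):
    """Extract the key=value fields inside the msg='...' payload of an
    audit VIRT_RESOURCE record. Double quotes around values are stripped."""
    m = re.search(r"msg='([^']*)'", line)
    if not m:
        return {}
    return {k: v.strip('"') for k, v in
            re.findall(r'(\S+)=("[^"]*"|\S+)', m.group(1))}

# Abbreviated copy of the expected record above:
record = ("type=VIRT_RESOURCE msg=audit(1430818368.900:261): pid=594 uid=0 "
          "msg='virt=qemu resrc=mem reason=start vm=\"dummy\" "
          "uuid=a19b2370-a7e8-4d65-82a1-53c0ccfe873c "
          "old-smem=0 new-smem=1048576 res=success'")
fields = parse_virt_resource(record)
print(fields["old-smem"], fields["new-smem"])   # 0 1048576
```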

Additional info:
Libvirt needs to make sure the shared memory exists before passing that to QEMU.  Security labels must be set and the information that a domain is starting with ivshmem must be audited.  There should also be a warning if the shm segment exists already!

Comment 1 Marc-Andre Lureau 2015-07-10 16:16:40 UTC
(In reply to Martin Kletzander from comment #0)
> Expected results:
> Something along the lines of:
> 
> type=VIRT_RESOURCE msg=audit(1430818368.900:261): pid=594 uid=0
> auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023
> msg='virt=qemu resrc=mem reason=start vm="dummy"
> uuid=a19b2370-a7e8-4d65-82a1-53c0ccfe873c old-smem=0 new-smem=1048576
> exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

I sent a RFC libvirt patch for that:
https://www.redhat.com/archives/libvir-list/2015-July/msg00496.html

> Additional info:

I am trying to understand the security requirements, although I think the discussion below is off-topic for this "audit log" bug.

> Libvirt needs to make sure the shared memory exists before passing that to

Why should libvirt make sure the shared memory exists? After all, libvirtd doesn't make sure that memory is available. If qemu fails to start because memory is lacking or the ivshmem can't be created, isn't that enough?

> QEMU.  Security labels must be set and the information that a domain is

Security labels are set automatically (I think because of the rule type_transition svirt_t tmpfs_t : file svirt_tmpfs_t;), except for ivshmem-server, which is lacking an SELinux context (opened #1242014 for that)

-rwxrwxr-x. 1 root    root    system_u:object_r:svirt_tmpfs_t:s0     33554432 Jul 10 12:04 shmem0

However, as you can see, the shm created by qemu has mode 0777 & mask; this isn't so great...

On the other hand, if the user created the shm beforehand, it can have different permissions, so I don't see much need to add a mode= argument/attribute to ivshmem.
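The "0777 & mask" above is the mode qemu requests, filtered by the creating process's umask. A quick Python illustration (the umask value here is just a common default, not necessarily what libvirtd runs with):

```python
# qemu requests mode 0777 for the shm file; the kernel applies the
# creating process's umask, so the effective mode is 0777 & ~umask.
requested = 0o777
umask = 0o022                  # assumed: a common default umask
effective = requested & ~umask
print(oct(effective))          # 0o755
```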

> starting with ivshmem must be audited.  

That's the purpose of this bug; see the patch.

> There should also be a warning if
> the shm segment exists already!

This would limit ivshmem's potential use cases (which we don't know well enough). I think the right SELinux context should cover a number of attacks, though.

Comment 2 Martin Kletzander 2015-07-12 10:44:55 UTC
(In reply to Marc-Andre Lureau from comment #1)
Another one is there already:

https://www.redhat.com/archives/libvir-list/2015-July/msg00316.html

Libvirt needs to make sure that the memory exists, because then we can manage it: we can make sure nobody else has accessed it, that it has the right label, that qemu can access it, and that we can unlink it once it is no longer used by any machine.

We will start the VM even if the segment exists already; we just need to audit that, so any admin going through the logs can see that the starting VM will have access to an already-existing shm block and is therefore not completely sealed off from the rest of the system.

Comment 17 yalzhang@redhat.com 2017-03-02 10:54:41 UTC
Verified on the packages below; the result is as expected, so I am setting this bug to verified.

# rpm -q libvirt qemu-kvm-rhev
libvirt-3.0.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-5.el7.x86_64

1. start a guest with ivshmem + ivshmem-plain + ivshmem-doorbell devices:

# virsh dumpxml rhel7.3
...
    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </shmem>
    <shmem name='my_shmem2'>
      <model type='ivshmem-doorbell'/>
      <server path='/tmp/ivshmem_socket'/>
      <msi ioeventfd='on'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </shmem>
    <shmem name='my_shmem3'>
      <model type='ivshmem'/>
      <size unit='M'>4</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </shmem>
...

# virsh start rhel7.3
error: Failed to start domain rhel7.3
error: internal error: process exited while connecting to monitor: 2017-03-02T07:47:17.568430Z qemu-kvm: -chardev socket,id=charshmem1,path=/tmp/ivshmem_socket: Failed to connect socket: No such file or directory

2. check audit log

type=VIRT_RESOURCE msg=audit(1488441043.591:2977): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=start vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=4194304 shmem="my_shmem1" server="?" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1488441043.591:2978): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=start vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=0 shmem="my_shmem2" server="/tmp/ivshmem_socket" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1488441043.591:2979): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=start vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=4194304 shmem="my_shmem3" server="?" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

3. attach an ivshmem-plain device to a running guest:

# cat shmem.xml
  <shmem name='my_shmem0'>
    <model type='ivshmem-plain'/>
    <size unit='M'>4</size>
  </shmem>

# virsh attach-device rhel7.3 shmem.xml 
Device attached successfully

# ll /dev/shm | grep shmem
-rw-r--r--. 1 qemu qemu  4194304 Mar  2 18:51 my_shmem0
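To check the backing file's permissions programmatically rather than eyeballing ls output, a small sketch (the path is hypothetical; substitute whatever name your shmem device used, e.g. /dev/shm/my_shmem0 above):

```python
import os
import stat

def shm_mode(path):
    """Return the permission bits of a shared-memory backing file as an int,
    e.g. 0o644 for the -rw-r--r-- file shown above."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Example (hypothetical path; adjust to your device name):
# shm_mode("/dev/shm/my_shmem0")
```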

4. check audit log:

type=VIRT_RESOURCE msg=audit(1488441462.251:3014): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=attach vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=4194304 shmem="my_shmem0" server="?" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

5. attach an ivshmem-doorbell device to a running guest; it fails:

# virsh attach-device rhel7.3 shmem_door.xml 
error: Failed to attach device from shmem_door.xml
error: internal error: unable to execute QEMU command 'chardev-add': Failed to connect socket: No such file or directory

6. check audit log:

type=VIRT_RESOURCE msg=audit(1488441950.031:3038): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=attach vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=0 shmem="shmem_server" server="/tmp/socket-shmem" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'

7. detach an ivshmem-plain device:
# virsh detach-device rhel7.3 shmem.xml
Device detached successfully

8. check audit log:

type=VIRT_RESOURCE msg=audit(1488452036.139:3275): pid=25464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=detach vm="rhel7.3" uuid=a7708061-faa0-42ce-897a-e92fb75fcf1d size=4194304 shmem="my_shmem0" server="?" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

Comment 18 Steve Grubb 2017-03-02 14:51:10 UTC
size=0 shmem="my_shmem2" server="/tmp/ivshmem_socket"

What are these fields? All audit fields need to be pre-approved so that the names are correct and if new names are needed, they are put in the field name dictionary at:

https://github.com/linux-audit/audit-documentation/blob/master/specs/fields/field-dictionary.csv

Comment 19 yalzhang@redhat.com 2017-03-03 01:42:25 UTC
Hi Martin, do we plan to remove these 3 fields from the audit log, or just add them to the dictionary?
This is discussed at https://bugzilla.redhat.com/show_bug.cgi?id=1389013#c13
I will hold off verifying this bug until there is a final conclusion.

Comment 20 Martin Kletzander 2017-03-03 13:00:09 UTC
I remember asking multiple times about the approach to be taken, with no answer. It wasn't in any BZ I can find, so let's scratch that; I'll start from the beginning and fix this.  Since the auditing is in place already, it will be pretty easy.

Where should I raise my questions about the naming?  Is it linux-audit mailing list or via issues/pull-requests on github?  I need to figure out what fields to use and whether to add new ones (which I believe will be necessary).

Comment 21 Steve Grubb 2017-03-09 20:16:01 UTC
Sorry, I was on PTO; back now. The first issue is: what are these fields? What are they recording? If you want to discuss on the mailing list, then yes, linux-audit would be a good place. I'd like to see what these are and whether we can just update docs. The concern is that if shmem and server are encoded fields, then support must be added to the audit reporting tools, or things get messed up. Thanks.

Comment 22 Martin Kletzander 2017-03-10 07:36:30 UTC
The shmem is the name of the shared memory region; shmem=asdf means it's /dev/shm/asdf.  The naming stuck from a previous version where the path was translated in QEMU.  We also add 'server' because that is the socket path to the ivshmem server, if used.  Are there any existing fields we could use for this?  I can't seem to find any.  If there are none, then we can take it to the mailing list to discuss further.

Comment 23 Steve Grubb 2017-03-10 18:44:27 UTC
These are not taken, but the "server" field concerns me. What you have is more of a path. I would expect "server" to be the name of a server. I would recommend changing the name. So, what is being assigned to the vm? The shmem or the path? And why does the path matter?  Feel free to start the discussion on linux-audit at any time. I am the moderator and can approve your emails without you needing to join the mail list.

Comment 24 yalzhang@redhat.com 2017-05-10 08:57:57 UTC
Hi Martin, any updates on this?

Comment 25 Martin Kletzander 2017-05-10 12:49:53 UTC
It completely slipped my mind.  I thought the ball was not in my court for some reason.  I just replied to Steve's idea, and I have the final patch prepared, which I will tweak based on Steve's reply.  I'm keeping the needinfo on myself so it reminds me in case I don't get anywhere for a while.  Sorry for the wait and thanks for reminding me.

Comment 26 Martin Kletzander 2017-05-22 12:18:57 UTC
@Steve: I have a patch prepared and waiting; it was an error on my part that this was delayed (see previous comment #25).  Could you just let me know (a yes/no answer is enough) whether this is OK with you:

  https://www.redhat.com/archives/linux-audit/2017-May/msg00020.html

Thanks a lot in advance and one more apology on my part.

Comment 28 yalzhang@redhat.com 2017-06-08 03:31:12 UTC
Tested on the packages below:
libvirt-3.2.0-9.el7.x86_64
qemu-kvm-rhev-2.9.0-8.el7.x86_64

For the devices in step 1, the related audit log shows the following fields:

ivshmem-plain:     resrc=shmem size=4194304 path=/dev/shm/my_shmem1  
ivshmem-doorbell:  resrc=ivshmem-socket  path=/tmp/ivshmem_socket 
ivshmem:           resrc=shmem size=4194304 path=/dev/shm/my_shmem3 
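Scripted checks for the new layout can key off the resrc value. A small sketch, under the assumption that the shapes tabulated above are exhaustive (the expected-key table is taken from the records in this comment, not from any libvirt or audit documentation):

```python
import re

# Keys expected in the msg='...' payload per resrc value,
# as observed in the audit records in this comment.
EXPECTED_KEYS = {
    "shmem": {"size", "path"},           # ivshmem-plain and legacy ivshmem
    "ivshmem-socket": {"path"},          # ivshmem-doorbell server socket
}

def audit_fields(line):
    """Parse the key=value payload of a VIRT_RESOURCE record into a dict."""
    m = re.search(r"msg='([^']*)'", line)
    if not m:
        return {}
    return {k: v.strip('"') for k, v in
            re.findall(r'(\S+)=("[^"]*"|\S+)', m.group(1))}

def missing_keys(line):
    """Return payload keys the record's resrc type should carry but lacks."""
    f = audit_fields(line)
    return sorted(EXPECTED_KEYS.get(f.get("resrc"), set()) - f.keys())

# Abbreviated copy of the first record from step 2 below:
record = ("type=VIRT_RESOURCE msg=audit(1496888468.180:29975): pid=19036 "
          "msg='virt=kvm resrc=shmem reason=start vm=\"rhel7.4\" "
          "size=4194304 path=/dev/shm/my_shmem1 res=success'")
print(missing_keys(record))   # []
```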

1. Add the below content to the guest's XML:

    <shmem name='my_shmem1'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>
    <shmem name='my_shmem2'>
      <model type='ivshmem-doorbell'/>
      <server path='/tmp/ivshmem_socket'/>
      <msi ioeventfd='on'/>
    </shmem>
    <shmem name='my_shmem3'>
      <model type='ivshmem'/>
      <size unit='M'>4</size>
    </shmem>

2. # virsh start rhel7.4
error: Failed to start domain rhel7.4
error: internal error: process exited while connecting to monitor: 2017-06-08T02:21:08.052705Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/10 (label charserial0)
2017-06-08T02:21:08.053253Z qemu-kvm: -chardev socket,id=charshmem1,path=/tmp/ivshmem_socket: Failed to connect socket: No such file or directory

check the audit log:
type=VIRT_RESOURCE msg=audit(1496888468.180:29975): pid=19036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=start vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 size=4194304 path=/dev/shm/my_shmem1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1496888468.180:29976): pid=19036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=ivshmem-socket reason=start vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 path=/tmp/ivshmem_socket exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1496888468.180:29977): pid=19036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=start vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 size=4194304 path=/dev/shm/my_shmem3 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

3. For attach:

# cat shmem1.xml
<shmem name='test_shmeme'>
    <model type='ivshmem-plain'/>
    <size unit='M'>4</size>
  </shmem>

# virsh attach-device rhel7.4 shmem1.xml
Device attached successfully

type=VIRT_RESOURCE msg=audit(1496891672.208:30492): pid=21711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=attach vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 size=4194304 path=/dev/shm/test_shmeme exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

# cat shmem2.xml
    <shmem name='my_shmem5'>
      <model type='ivshmem-doorbell'/>
      <server path='/tmp/ivshmem_socket'/>
      <msi ioeventfd='on'/>
    </shmem>

# virsh attach-device rhel7.4  shmem2.xml
error: Failed to attach device from shmem2.xml
error: internal error: unable to execute QEMU command 'chardev-add': Failed to connect socket: No such file or directory

type=VIRT_RESOURCE msg=audit(1496891755.045:30493): pid=21711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=ivshmem-socket reason=attach vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 path=/tmp/ivshmem_socket exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'

# cat shmem3.xml
   <shmem name='my_shmem3'>
      <model type='ivshmem'/>
      <size unit='M'>8</size>
    </shmem>

# virsh attach-device rhel7.4 shmem3.xml
error: Failed to attach device from shmem3.xml
error: Operation not supported: live attach of shmem model 'ivshmem' is not supported

no audit log

4. For detach:

# virsh dumpxml rhel7.4 | grep /shmem -B5
    <shmem name='test_shmeme'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
      <alias name='shmem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </shmem>

# cat shmem1.xml
<shmem name='test_shmeme'>
    <model type='ivshmem-plain'/>
    <size unit='M'>4</size>
  </shmem>

# virsh detach-device rhel7.4  shmem1.xml
Device detached successfully

type=VIRT_RESOURCE msg=audit(1496891959.240:30494): pid=21711 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=shmem reason=detach vm="rhel7.4" uuid=c156ca6f-3c16-435b-980d-9745e1d84ad1 size=4194304 path=/dev/shm/test_shmeme exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'

Comment 29 errata-xmlrpc 2017-08-01 17:06:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846


