Bug 1631606 - Creating a luks vol fails when the user is given access control for the storage-vol API
Summary: Creating a luks vol fails when the user is given access control for the storage-vol API
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: yafu
URL:
Whiteboard:
Depends On:
Blocks: 1631608 1651787
 
Reported: 2018-09-21 03:23 UTC by Meina Li
Modified: 2019-08-06 13:14 UTC (History)
CC List: 12 users

Fixed In Version: libvirt-4.5.0-12.el7
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1631608
Environment:
Last Closed: 2019-08-06 13:13:56 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHSA-2019:2294 (last updated 2019-08-06 13:14:34 UTC)

Description Meina Li 2018-09-21 03:23:42 UTC
Description of problem:
Creating a luks vol fails when the user is given access control for the storage-vol API.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.12.0-17.el7.x86_64
polkit-0.112-18.el7.x86_64
libvirt-4.5.0-10.el7.x86_64
kernel-3.10.0-948.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare the env.

1) Add a normal user 
#useradd test1 
#passwd test1

2) Enable 'polkit' as the Access control driver 
#vim /etc/libvirt/libvirtd.conf
access_drivers = [ "polkit" ] 

3) Grant the local user permission to connect to libvirt in full read-write mode
#vi /etc/libvirt/libvirtd.conf 
unix_sock_rw_perms = "0777"
auth_unix_rw = "none"

4) Restart libvirtd:
# systemctl restart libvirtd

2. Prepare a pool.
# virsh pool-dumpxml virt-dir-pool
setlocale: No such file or directory
<pool type='dir'>
  <name>virt-dir-pool</name>
  <uuid>96210010-80ea-48b1-91ea-22d825bb3035</uuid>
  <capacity unit='bytes'>53660876800</capacity>
  <allocation unit='bytes'>8423579648</allocation>
  <available unit='bytes'>45237297152</available>
  <source>
  </source>
  <target>
    <path>/var/tmp/dir-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>unconfined_u:object_r:user_tmp_t:s0</label>
    </permissions>
  </target>
</pool>

3. Prepare a luks vol.
1) # virsh secret-list
setlocale: No such file or directory
 UUID                                  Usage
--------------------------------------------------------------------------------
 48cf443a-f0bd-4583-92ec-82b472c4101a  volume /var/tmp/puppyname.img

2) # cat /var/tmp/vol-luks.xml
<volume>
  <name>vol_create_test_1</name>
  <capacity>10485760</capacity>
  <allocation>1048576</allocation>
  <target>
    <path>/var/tmp/puppyname.img</path>
    <format type='raw'/>
    <encryption format='luks'>
       <secret type='passphrase' uuid='48cf443a-f0bd-4583-92ec-82b472c4101a'/>
    </encryption>
  </target>
</volume>

4. Give user the permission to create the vol.
# cat /etc/polkit-1/rules.d/100-libvirt-acl.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.api.storage-vol.create" &&
        subject.user == "test1") {
        if (action.lookup("connect_driver") == 'QEMU') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
    if (action.id == 'org.libvirt.api.secret.read-secure' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'QEMU') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
});

# systemctl restart polkit

5. Create luks vol by user test1.
# su - test1 -c '/usr/bin/virsh -c 'qemu:///system' vol-create --pool virt-dir-pool --file /var/tmp/vol-luks.xml'
error: Failed to create vol from /var/tmp/vol-luks.xml
error: access denied

Actual results:
Creating the luks vol fails with "access denied".

Expected results:
The luks vol is created successfully.

Additional info:
# tail -f /var/log/libvirtd_debug.log | grep error
2018-09-20 09:21:37.946+0000: 30429: error : virPolkitCheckAuth:131 : authentication failed: access denied by policy
2018-09-20 09:21:37.946+0000: 30429: error : virStorageVolCreateXMLEnsureACL:7995 : access denied

Comment 3 yafu 2018-09-21 08:42:58 UTC
Similar issue happens when trying to start guest with an interface bound to nwfilter.

Test steps:
1. Prepare the env as in step 1 of comment 0;

2. Add polkit rules:
polkit.addRule(function(action, subject) {
   if (action.id == 'org.libvirt.api.nwfilter-binding.create' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'QEMU') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
   
     if (action.id == "org.libvirt.api.domain.start" && subject.user == "test1") {
        if (action.lookup("connect_driver") == 'QEMU' && action.lookup("domain_name") == '76') {
           return polkit.Result.YES;
        } else {
           return polkit.Result.NO;
        }
     }
});

3.Define a guest with an interface bound to nwfilter:
#virsh edit 76
 <interface type='network'>
      <mac address='52:54:00:fa:81:87'/>
      <source network='default'/>
      <model type='rtl8139'/>
      <filterref filter='clean-traffic'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

4.Start guest by user test1:
su - test1 -c '/usr/bin/virsh -c 'qemu:///system' start 76'
error: Failed to start domain 76
error: access denied


5.Check the libvirtd log:
#cat /var/log/libvirt/libvirtd.log | grep -i polkit
2018-09-21 08:32:13.371+0000: 13186: debug : virAccessManagerCheckNWFilterBinding:306 : manager=0x561fdb120330(name=polkit) driver=nwfilter binding=0x7f0554036b00 perm=2
2018-09-21 08:32:13.371+0000: 13186: debug : virAccessDriverPolkitCheck:138 : Check action 'org.libvirt.api.nwfilter-binding.create' for process '23218' time 17596535 uid 1002
2018-09-21 08:32:13.371+0000: 13186: info : virPolkitCheckAuth:78 : Checking PID 23218 running as 1002
2018-09-21 08:32:13.402+0000: 13186: debug : virPolkitCheckAuth:115 : is auth ****0*****  is challenge 0
2018-09-21 08:32:13.402+0000: 13186: error : virPolkitCheckAuth:131 : authentication failed: access denied by policy

6.User test1 can create binding filter using 'virsh nwfilter-binding-create':
# su - test1 -c '/usr/bin/virsh -c 'qemu:///system' nwfilter-binding-create /tmp/bind.xml'
Network filter binding on vnet0 created from /tmp/bind.xml

7.Check libvirtd log for step 6:
2018-09-21 08:39:06.410+0000: 13186: debug : virAccessManagerCheckNWFilterBinding:306 : manager=0x561fdb120330(name=polkit) driver=QEMU binding=0x7f055402ec00 perm=2
2018-09-21 08:39:06.410+0000: 13186: debug : virAccessDriverPolkitCheck:138 : Check action 'org.libvirt.api.nwfilter-binding.create' for process '6827' time 18985553 uid 1002
2018-09-21 08:39:06.410+0000: 13186: info : virPolkitCheckAuth:78 : Checking PID 6827 running as 1002
2018-09-21 08:39:06.412+0000: 13186: debug : virPolkitCheckAuth:115 : is auth ****1****  is challenge 0

Comment 5 John Ferlan 2018-09-22 14:37:12 UTC
Well, I believe this is a "byproduct" (so to speak) of splitting up the connections for each state driver.  The lines:

    if (action.lookup('connect_driver') == 'QEMU')

assume that the QEMU hypervisor_driver was being used for the connection; however, that has changed and now the various state drivers will have their own connection.

For storage connections that would be "storage", secret would be "secret", and nwfilter uses "NWFilter".


Note that the command that you used that worked:

# su - test1 -c '/usr/bin/virsh -c 'qemu:///system' nwfilter-binding-create /tmp/bind.xml'
Network filter binding on vnet0 created from /tmp/bind.xml

Well, that used '-c qemu:///system' and thus QEMU as the connection driver, so that's why it worked.

Thus, I think in order for things to work now, you'll need to use for the base problem:

...
if (action.id == "org.libvirt.api.storage-vol.create" &&
subject.user == "test1") {
if (action.lookup("connect_driver") == 'storage') {
...

and for the followup problem:

...
   if (action.id == 'org.libvirt.api.nwfilter-binding.create' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'NWFilter') {
...

Can a retest be done with changing the connect_driver name in your test?

I'm not sure this is "documented well enough" - at least with respect to the "name" to use. That can be far more easily fixed than the alternative! In the long run, though, it's the direction we're heading: more connection orientation rather than everything running through the hypervisor connection.

Comment 6 John Ferlan 2018-09-22 16:09:26 UTC
<sigh> try 'nwfilter' for the nwfilter connection - in retrospect I was looking at the wrong name, as I realized when I dug deeper and started writing some doc adjustments.

Comment 7 yafu 2018-09-25 02:28:02 UTC
(In reply to John Ferlan from comment #5)
> Well, I believe this is a "byproduct" (so to speak) of splitting up the
> connections for each state driver.  The lines:
> 
>     if (action.lookup('connect_driver') == 'QEMU')
> 
> assume that the QEMU hypervisor_driver was being used for the connection;
> however, that has changed and now the various state drivers will have their
> own connection.
> 
> For storage connections, that would be "storage", secret would be "secret",
> nwfilter uses "NWFilter"
> 
> 
> Note that the command that you used that worked:
> 
> # su - test1 -c '/usr/bin/virsh -c 'qemu:///system' nwfilter-binding-create
> /tmp/bind.xml'
> Network filter binding on vnet0 created from /tmp/bind.xml
> 
> Well that used "-c qemu:///system' and thus QEMU as the connection driver,
> so that's why it worked.
> 
> Thus, I think in order for things to work now, you'll need to use for the
> base problem:
> 
> ...
> if (action.id == "org.libvirt.api.storage-vol.create" &&
> subject.user == "test1") {
> if (action.lookup("connect_driver") == 'storage') {
> ...
> 
> and for the followup problem:
> 
> ...
>    if (action.id == 'org.libvirt.api.nwfilter-binding.create' &&
> subject.user == 'test1') {
>         if (action.lookup('connect_driver') == 'NWFilter') {
> ...
> 
> Can a retest be done with changing the connect_driver name in your test?
> 
> I'm not sure this is "documented well enough" - at least with respect to the
> "name" to use. That can be far more easily done that the alternative!  In
> the long run though - it's the direction we're heading to have more
> connection orientation rather than everything running through the hypervisor
> connection.

Can create luks vol successfully with polkit rule as follows:
    if (action.id == 'org.libvirt.api.storage-vol.create' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'QEMU') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
    if (action.id == 'org.libvirt.api.secret.read-secure' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'secret' ) {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }

Failed to create the luks vol if I changed the connection driver to 'storage' for the action 'org.libvirt.api.storage-vol.create'. Did it work as expected?

And can start guest with nwfilter binding successfully with polkit rules:
    if (action.id == 'org.libvirt.api.nwfilter-binding.create' && subject.user == 'test1') {
        if (action.lookup('connect_driver') == 'nwfilter') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
    if (action.id == "org.libvirt.api.domain.start" &&
        subject.user == "test1") {
        if (action.lookup("connect_driver") == 'QEMU' &&
            action.lookup("domain_name") == '76') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }

Comment 9 John Ferlan 2018-09-26 12:36:21 UTC
(In reply to yafu from comment #7)
> (In reply to John Ferlan from comment #5)
> > Well, I believe this is a "byproduct" (so to speak) of splitting up the
> > connections for each state driver.  The lines:
> > 
> >     if (action.lookup('connect_driver') == 'QEMU')
> > 
> > assume that the QEMU hypervisor_driver was being used for the connection;
> > however, that has changed and now the various state drivers will have their
> > own connection.
> > 
> > For storage connections, that would be "storage", secret would be "secret",
> > nwfilter uses "NWFilter"
> > 
> > 
> > Note that the command that you used that worked:
> > 
> > # su - test1 -c '/usr/bin/virsh -c 'qemu:///system' nwfilter-binding-create
> > /tmp/bind.xml'
> > Network filter binding on vnet0 created from /tmp/bind.xml
> > 
> > Well that used "-c qemu:///system' and thus QEMU as the connection driver,
> > so that's why it worked.
> > 
> > Thus, I think in order for things to work now, you'll need to use for the
> > base problem:
> > 
> > ...
> > if (action.id == "org.libvirt.api.storage-vol.create" &&
> > subject.user == "test1") {
> > if (action.lookup("connect_driver") == 'storage') {
> > ...
> > 
> > and for the followup problem:
> > 
> > ...
> >    if (action.id == 'org.libvirt.api.nwfilter-binding.create' &&
> > subject.user == 'test1') {
> >         if (action.lookup('connect_driver') == 'NWFilter') {
> > ...
> > 
> > Can a retest be done with changing the connect_driver name in your test?
> > 
> > I'm not sure this is "documented well enough" - at least with respect to the
> > "name" to use. That can be far more easily done that the alternative!  In
> > the long run though - it's the direction we're heading to have more
> > connection orientation rather than everything running through the hypervisor
> > connection.
> 
> Can create luks vol successfully with polkit rule as follows:
>     if (action.id == 'org.libvirt.api.storage-vol.create' && subject.user ==
> 'test1') {
>         if (action.lookup('connect_driver') == 'QEMU') {
>             return polkit.Result.YES;
>         } else {
>             return polkit.Result.NO;
>         }
>     }
>     if (action.id == 'org.libvirt.api.secret.read-secure' && subject.user ==
> 'test1') {
>         if (action.lookup('connect_driver') == 'secret' ) {
>             return polkit.Result.YES;
>         } else {
>             return polkit.Result.NO;
>         }
>     }
> 
> Failed to create luks vol if i changed the change connection to 'storage'
> for action item ''org.libvirt.api.storage-vol.create'. Did it work as
> expected?

Yes, but with an assumption and a caveat.

I assume you're using the same command from the original posting:

    su - test1 -c '/usr/bin/virsh -c 'qemu:///system' vol-create --pool virt-dir-pool --file /var/tmp/vol-luks.xml'


Since QEMU has been historically allowed to directly access other drivers, then it's the primary connection, thus for the "vol-create" command it's fine for 'QEMU' to be used as the argument and using 'storage' wouldn't work. However, if you changed your command to use '-c storage:///system' and had the corresponding change in the polkit rule, then that "should" work.  In this case, the "secondary" connection is via the secret driver in order to lookup the secret to use for the creation, thus the secret rule would still apply for the secondary connection (regardless of primary connection).

BTW: I assume a polkit rule could be written as:

    if (action.lookup('connect_driver') == 'QEMU' ||
        action.lookup('connect_driver') == 'storage') {
...

in order to cover "both" possibilities. That probably "eases" the pain.

> 
> And can start guest with nwfilter binding successfully with polkit rules:
>    if (action.id == 'org.libvirt.api.nwfilter-binding.create' &&
> subject.user == 'test1') {
>         if (action.lookup('connect_driver') == 'nwfilter') {
>             return polkit.Result.YES;
>         } else {
>             return polkit.Result.NO;
>         }
>     if (action.id == "org.libvirt.api.domain.start" &&
>        subject.user == "test1") {
>        if (action.lookup("connect_driver") == 'QEMU' &&
>            action.lookup("domain_name") == '76') {
>          return polkit.Result.YES;
>        } else {
>          return polkit.Result.NO;
>     }
>     }
> 
>     }

In this case, the domain startup is the primary connection, thus 'QEMU' would be used to pass the domain startup. Then when creating an NWFilter binding, the "nwfilter" driver becomes the secondary connection.

This is going to be painful to describe... to say nothing of how confusing it'll be for someone trying to write polkit rules.

Comment 11 John Ferlan 2018-11-04 13:19:25 UTC
I neglected to note, I posted patches to alter the error message and update the docs to describe the primary/secondary concerns, see:

https://www.redhat.com/archives/libvir-list/2018-October/msg00809.html

The error message will now list the driver by name that failed the connection, so that should help when writing rules.

Since then there have been some conversations regarding the changes; the following is a cut-n-paste of an IRC conversation between danpb and myself, with laine chiming in at the end:

<jferlan> danpb: meant to talk to you about it at KVM Forum, but forgot... I just ping'd on a series related to how the connection separation changes (months ago) have impacted which connection driver name is used for some polkit calls (non primary) and would be interested obviously in your thoughts.
...
<danpb> jferlan: i talked to someone at the kvm forum about that, but can't remember who
<jferlan> Erik talked to me briefly about it, but too many other things happened to recall all the context
<danpb> jferlan: the change you cc'd me on about ACLs is just a docs clarification ?
<jferlan> danpb: well a bit more than just docs clarification, but mostly yes.  The other part is a slight change to the error message to include the name of/from the driver that failed the VIR_ERR_ACCESS_DENIED
<jferlan> Previously it was going to be 'qemu' for just about everything, now with secondary connections it depends on the API... I figured a bit more information would be helpful to those writing the polkit rules
<danpb> yeah, ok - in theory we could keep back compat
<danpb> but i think in the long run this will make more sense
<danpb> as each driver will be a separate daemon
<danpb> and you'd want consistent ACL rules regardless of whether something connected straight to network:/// daemon, or indirectly via a qemu:///  call that delegated to the network daemon
<jferlan> with the changes made a few releases ago of course, those rules will need to be different now when say the primary connection (whatever it is) makes the call to some secondary driver.  I have to wonder how many have QEMU in the rules and will now be fscked
<danpb> the plus side is that the number of people using this feature is probably single digits 
<laine> So this is only an issue if somebody is using polkitd, *and* they are connecting directly to the secondary drivers? If that's the case, then I think that "single digit" likely has a single value, and that value is "0". :-P
<jferlan> laine: if someone connects to say storage:/// but that needs to contact the secret:/// driver, then "secret" would be used instead of "storage". 
<jferlan> of course the "wonder" is how many use "qemu:///" to perform "storage" operations?  In that case, "QEMU" is the primary, but "secret" is still the secondary
<jferlan> btw: upstream, agree with your assessment and furthermore that zero could be extrapolated even further.  Downstream though that could be different.  I think the problem was seen by some avocado-vt test, so there's at least 1 consumer.
<laine> jferlan: ah, didn't follow it all the way through. Without spending any time at all pondering consequences, I think it should validate based on the original connection URL.
<laine> jferlan: testers aren't real consumers - they can be told "don't do that, it's illegal".
<laine> (in some cases that's not the correct resolution, in others it is)
<jferlan> don't disagree completely, although the challenge will be the it's harder to do than say... 
<jferlan> that is using the primary connection... 
<jferlan> testing is a different story...
<laine> jferlan: "If I don't know how to do it, it must be trivial".
<jferlan> there was a fairly careful extraction of the primary connection from the code, now code would have to keep two connections around - one just for polkit and the other for API's to use... Feels odd
<jferlan> and perhaps too much work for the "handful" that really care 


It seems from the conversations that there's agreement that using the non-primary driver name, even though it may cause some back-compat issues, is the right thing to do. Especially as we move forward with splitting up driver functions, a hypervisor (e.g. QEMU) connection may not be the "default" any more.

Comment 12 John Ferlan 2018-11-05 12:31:04 UTC
Patches are now pushed upstream:

commit 4f1107614dc1384c4aa7a5582a16aecba8b9310f
Author: John Ferlan <jferlan>
Date:   Sun Sep 23 11:56:46 2018 -0400

    docs: Enhance polkit documentation to describe secondary connection
    
    ...
    
    Since commit 8259255 usage of a primary connection driver for
    a virConnect has been modified to open (virConnectOpen) and use
    a connection to the specific driver in order to handle the API
    calls to/for that driver. This causes some confusion and issues
    for ACL polkit rule scripts to know exactly which driver by
    name will be used.
    
    Add some documentation describing the processing of the primary
    and secondary connection as well as the list of the connect_driver
    names used for each driver.
    
    Signed-off-by: John Ferlan <jferlan>
    ACKed-by: Michal Privoznik <mprivozn>


$ git describe 4f1107614dc1384c4aa7a5582a16aecba8b9310f
v4.9.0-7-g4f1107614d
$

Comment 13 John Ferlan 2018-11-14 19:26:37 UTC
<sigh> Turns out part of the patches I pushed were wrong, so I ended up posting two patches to revert and rework the logic; see patch 1 and patch 2 of the series:

https://www.redhat.com/archives/libvir-list/2018-November/msg00434.html

Both were ACK'd and pushed upstream:

commit 605496be609e153526fcdd3e98df8cf5244bc8fa
Author: John Ferlan <jferlan>
Date:   Mon Nov 12 08:15:02 2018 -0500

    access: Modify the VIR_ERR_ACCESS_DENIED to include driverName
    
    ...
    
    Changes made to manage and utilize a secondary connection
    driver to APIs outside the scope of the primary connection
    driver have resulted in some confusion processing polkit rules
    since the simple "access denied" error message doesn't provide
    enough of a clue when combined with the "authentication failed:
    access denied by policy" as to which connection driver refused
    or failed the ACL check.
    
    In order to provide some context, let's modify the existing
    "access denied" error returned from the various vir*EnsureACL
    API's to provide the connection driver name that is causing
    the failure. This should provide the context for writing the
    polkit rules that would allow access via the driver, but yet
    still adhere to the virAccessManagerSanitizeError commentary
    regarding not telling the user why access was denied.

...

$ git describe 605496be609e153526fcdd3e98df8cf5244bc8fa
v4.9.0-55-g605496be60
$

....

commit b08396a5feab02fb3bb595603c888ee733aa178e
Author: John Ferlan <jferlan>
Date:   Mon Nov 12 07:33:06 2018 -0500

    Revert "access: Modify the VIR_ERR_ACCESS_DENIED to include driverName"
    
    This reverts commit ccc72d5cbdd85f66cb737134b3be40aac1df03ef.
    
    Based on upstream comment to a follow-up patch, this didn't take the
    right approach and the right thing to do is revert and rework.
    
...

$ git describe b08396a5feab02fb3bb595603c888ee733aa178e
v4.9.0-54-gb08396a5fe
$

Comment 18 yafu 2019-04-12 09:30:17 UTC
Reproduced with libvirt-4.5.0-10.el7.x86_64.

Verified with libvirt-4.5.0-12.el7.x86_64.
Test steps:
Prepare the env.
1) Add a normal user 
#useradd test1 
#passwd test1

2) Enable 'polkit' as the Access control driver 
#vim /etc/libvirt/libvirtd.conf
access_drivers = [ "polkit" ] 

3) Grant the local user permission to connect to libvirt in full read-write mode
#vi /etc/libvirt/libvirtd.conf 
unix_sock_rw_perms = "0777"
auth_unix_rw = "none"

4) Restart libvirtd:
# systemctl restart libvirtd

1.Interface driver:
#su - test1 -c '/usr/bin/virsh -c 'interface:///system' iface-destroy br0'
error: Failed to destroy interface br0
error: access denied: 'interface' denied access

2.Network driver:
#su - test1 -c '/usr/bin/virsh -c 'network:///system' net-destroy default'
error: Failed to destroy network default
error: access denied: 'network' denied access

3.Nodedevice driver:
# su - test1 -c '/usr/bin/virsh -c 'nodedev:///system' nodedev-dumpxml pci_0000_ff_16_7'
error: access denied: 'nodedev' denied access

4.nwfilter driver:
4.1#su - test1 -c '/usr/bin/virsh -c 'nwfilter:///system' nwfilter-undefine qemu-announce-self'
error: Failed to undefine network filter qemu-announce-self
error: access denied: 'nwfilter' denied access
4.2 Repeat steps 1-3 in comment 3;
4.3 # su - test1 -c '/usr/bin/virsh -c 'qemu:///system' start avocado-vt-vm1'
error: Failed to start domain avocado-vt-vm1
error: access denied: 'nwfilter' denied access

5.secret driver:
5.1 Repeat steps 1-4 in comment 0:
5.2 # su - test1 -c '/usr/bin/virsh -c 'qemu:///system' vol-create --pool virt-dir-pool --file /xml/vol-luks.xml'
error: Failed to create vol from /xml/vol-luks.xml
error: access denied: 'secret' denied access

6.Storage driver:
6.1# su - test1 -c '/usr/bin/virsh -c 'storage:///system' pool-destroy default'
error: Failed to destroy pool default
error: access denied: 'storage' denied access
6.2# su - test1 -c '/usr/bin/virsh -c 'storage:///system' vol-list default'
error: Failed to list volumes
error: access denied: 'storage' denied access

Comment 20 errata-xmlrpc 2019-08-06 13:13:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2294

