Bug 1924218 - libvirtd[4724]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
Summary: libvirtd[4724]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not havi...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 34
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 1925094 1940791
Depends On:
Blocks:
 
Reported: 2021-02-02 20:08 UTC by Steve Grubb
Modified: 2021-09-30 01:12 UTC
CC: 41 users

Fixed In Version: libvirt-7.0.0-6.fc34 libvirt-7.0.0-7.fc34
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-30 01:12:50 UTC
Type: Bug




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1940791 1 unspecified CLOSED libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply 2021-05-19 22:25:29 UTC
Red Hat Bugzilla 1952715 1 unspecified NEW libcap-ng used by "/usr/sbin/irqbalance" failed due to not having CAP_SETPCAP in capng_apply 2022-01-03 12:24:01 UTC

Internal Links: 1952715

Description Steve Grubb 2021-02-02 20:08:58 UTC
Description of problem:
Libcap-ng-0.8.2 does better error detection of some problems that were previously hidden. A patched version of libcap-ng is now emitting warnings when it sees a problem in the use of the API.

Currently there is a call to capng_apply which is apparently clearing the bounding set. However, libvirt doesn't have CAP_SETPCAP, which means that it cannot change the bounding set. Switching from capng_apply(CAPNG_SELECT_BOTH); to capng_apply(CAPNG_SELECT_CAPS); should fix the issue, although it doesn't clear the bounding set. On the other hand, the bounding set wasn't getting cleared in the first place. If it needed to be cleared, it should be done earlier in the process, while libvirt still had full capabilities.

At some point in the future, libcap-ng will start passing the real errors to user space so that these problems can be detected during development.

Version-Release number of selected component (if applicable):
libvirt-7.0.0-1

Comment 1 Daniel Berrangé 2021-02-03 09:56:10 UTC
(In reply to Steve Grubb from comment #0)
> Description of problem:
> Libcap-ng-0.8.2 does better error detection of some problems that were
> previously hidden. A patched version of libcap-ng is now emitting warnings
> when it sees a problem in the use of the API.
> 
> Currently there is a call to capng_apply which is apparently clearing the
> bounding set. However, libvirt doesn't have CAP_SETPCAP, which means that it
> cannot change the bounding set. Switching from
> capng_apply(CAPNG_SELECT_BOTH); to capng_apply(CAPNG_SELECT_CAPS);
> should fix the issue, although it doesn't clear the bounding set. On the
> other hand, the bounding set wasn't getting cleared in the first place. If it
> needed to be cleared, it should be done earlier in the process, while libvirt
> still had full capabilities.

Looking at the two places where libvirt uses SELECT_BOTH, we should have full caps in both cases.

Can you give more details on the scenario in which you're triggering the warning in libvirt?

Comment 2 Steve Grubb 2021-02-03 13:56:30 UTC
The error message is because it is falling into the else branch here:
https://github.com/stevegrubb/libcap-ng/blob/master/src/cap-ng.c#L729

Fedora is patched to not return the -4 and to log that it failed.

Comment 3 Daniel Berrangé 2021-02-03 14:26:38 UTC
What libvirt functionality are you using to trigger this ?

Comment 4 Steve Grubb 2021-02-03 14:32:58 UTC
This is in the logs after system boot. If I stop and start libvirt, it picks up 2 new warnings.

Comment 5 Daniel Berrangé 2021-02-03 16:26:05 UTC
We have some code to conditionally preserve CAP_SETPCAP in cases where we expect to need it:

https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virutil.c#L1203

The later code which actually does the capng_apply(CAPNG_SELECT_BOUNDS); call is unconditional though:

https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virutil.c#L1270


IOW, it looks like we were silently just expecting the capng_apply() to fail in the cases where we didn't keep CAP_SETPCAP

This previously harmless failure now triggers the warning message, so I guess we should make the call to capng_apply conditional on whether we really need it.

Comment 6 Steve Grubb 2021-02-03 20:04:27 UTC
Well, at least whether or not to include the BOUNDING_SET should be conditional. You can still change caps in some cases if it's to lower them. I apologize for the behavior change, but some people really needed a hard error returned because they needed to know the bounding set was still wide open even though it was supposedly cleared.

Comment 7 Daniel Berrangé 2021-02-04 12:05:41 UTC
*** Bug 1925094 has been marked as a duplicate of this bug. ***

Comment 8 Ben Cotton 2021-02-09 16:01:59 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 34 development cycle.
Changing version to 34.

Comment 9 Alexander Murashkin 2021-05-05 01:03:27 UTC
Could somebody clarify whether this issue is supposed to be resolved in F34? And if not, is it just a warning that can be ignored?

The bug is also reported as bug 1940791.

Comment 10 Michal Ambroz 2021-05-10 20:17:59 UTC
Still a thing in F34. After upgrading to F34 I am seeing this:
$ sudo systemctl status libvirtd
● libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-05-10 22:02:04 CEST; 4s ago
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd-ro.socket
             ● libvirtd.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 259911 (libvirtd)
      Tasks: 21 (limit: 32768)
     Memory: 40.1M
        CPU: 399ms
     CGroup: /system.slice/libvirtd.service
             ├─  1771 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
             ├─  1772 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
             └─259911 /usr/sbin/libvirtd --timeout 120

May 10 22:02:04 example.com systemd[1]: Starting Virtualization daemon...
May 10 22:02:04 example.com systemd[1]: Started Virtualization daemon.
May 10 22:02:04 example.com libvirtd[259931]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
May 10 22:02:04 example.com libvirtd[259932]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
May 10 22:02:04 example.com dnsmasq[1771]: read /etc/hosts - 23 addresses
May 10 22:02:04 example.com dnsmasq[1771]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
May 10 22:02:04 example.com dnsmasq-dhcp[1771]: read /var/lib/libvirt/dnsmasq/default.hostsfile


$ rpm -q libvirt-daemon libcap-ng
libvirt-daemon-7.0.0-4.fc34.x86_64
libcap-ng-0.8.2-4.fc34.x86_64

Comment 11 Timothée Ravier 2021-05-11 18:27:31 UTC
According to https://bugzilla.redhat.com/show_bug.cgi?id=1952715#c4, a rebuild should be enough

Comment 12 Steve Grubb 2021-05-11 19:07:59 UTC
(In reply to Timothée Ravier from comment #11)
> According to https://bugzilla.redhat.com/show_bug.cgi?id=1952715#c4, a
> rebuild should be enough

I looked into that. The problem there is in the service file.

Comment 13 Nicolas 2021-05-14 08:40:56 UTC
Just confirming I'm observing the same behaviour:

 sudo systemctl status libvirtd.service
○ libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Fri 2021-05-14 09:28:49 BST; 9min ago
TriggeredBy: ● libvirtd-ro.socket
             ● libvirtd-admin.socket
             ● libvirtd.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
    Process: 45798 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 45798 (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 32768)
     Memory: 30.7M
        CPU: 370ms
     CGroup: /system.slice/libvirtd.service
             ├─1218 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
             └─1219 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

May 14 09:26:49 mun1n systemd[1]: Starting Virtualization daemon...
May 14 09:26:49 mun1n systemd[1]: Started Virtualization daemon.
May 14 09:26:49 mun1n libvirtd[45821]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
May 14 09:26:49 mun1n libvirtd[45822]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
May 14 09:26:49 mun1n dnsmasq[1218]: read /etc/hosts - 2 addresses
May 14 09:26:49 mun1n dnsmasq[1218]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
May 14 09:26:49 mun1n dnsmasq-dhcp[1218]: read /var/lib/libvirt/dnsmasq/default.hostsfile
May 14 09:28:49 mun1n systemd[1]: libvirtd.service: Deactivated successfully.
May 14 09:28:49 mun1n systemd[1]: libvirtd.service: Unit process 1218 (dnsmasq) remains running after unit stopped.
May 14 09:28:49 mun1n systemd[1]: libvirtd.service: Unit process 1219 (dnsmasq) remains running after unit stopped.
$ rpm -q libvirt-daemon libcap-ng
libvirt-daemon-7.0.0-4.fc34.x86_64
libcap-ng-0.8.2-4.fc34.x86_64
$

Comment 14 Cole Robinson 2021-05-19 22:25:29 UTC
*** Bug 1940791 has been marked as a duplicate of this bug. ***

Comment 15 Joerg K 2021-05-20 18:55:45 UTC
Time appropriate greetings,

I'd like to confirm that I see this issue in F34 not with libvirt but with `/usr/sbin/mount.cifs`. The exact error showing up in journald is:

"Mai 20 20:42:29 t14s mount.cifs[11933]: libcap-ng used by "/usr/sbin/mount.cifs" failed due to not having CAP_SETPCAP in capng_apply"

~~~
$ rpm -q cifs-utils libcap-ng
cifs-utils-6.11-3.fc34.x86_64
libcap-ng-0.8.2-4.fc34.x86_64
~~~

Regards,  
Joerg

Comment 16 Steve Grubb 2021-05-20 19:00:28 UTC
(In reply to Joerg K from comment #15)
> Time appropriate greetings,
> 
> I'd like to confirm that I see this issue in F34 not with libvirt but with
> `/usr/sbin/mount.cifs`. The exact error showing up in journald is:
> 
> "Mai 20 20:42:29 t14s mount.cifs[11933]: libcap-ng used by
> "/usr/sbin/mount.cifs" failed due to not having CAP_SETPCAP in capng_apply"
> 
> ~~~
> $ rpm -q cifs-utils libcap-ng
> cifs-utils-6.11-3.fc34.x86_64
> libcap-ng-0.8.2-4.fc34.x86_64
> ~~~


That should be reported against the cifs-utils package. The issue needs a case-by-case fix in each program reported. The fix can be as simple as changing the capng_apply from SELECT_BOTH to SELECT_CAPS.

Comment 17 Joerg K 2021-05-20 19:11:50 UTC
(In reply to Steve Grubb from comment #16)
> That should be reported against the cifs-utils package. The issue needs a
> case-by-case fix in each program reported. The fix can be as simple as
> changing the capng_apply from SELECT_BOTH to SELECT_CAPS.

Thanks for letting me know. I'll report it against the cifs-utils package.

Comment 18 Pranav 2021-06-01 16:15:03 UTC
Clean-flashed Fedora 34. I'm having this same issue.

"libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply"

Hope this issue is solved soon...

Comment 19 RobbieTheK 2021-06-07 14:25:59 UTC
Is there a test package available to fix this?

mount.cifs[915619]: libcap-ng used by "/usr/sbin/mount.cifs" failed due to not having CAP_SETPCAP in capng_apply

rpm -q cifs-utils libcap-ng
cifs-utils-6.11-3.fc34.x86_64
libcap-ng-0.8.2-4.fc34.x86_64

Comment 20 Michal Privoznik 2021-06-25 07:27:44 UTC
D'oh! I completely missed this bug and analysis here. I've sent a patch because of a RHEL variant of this bug:  bug 1949388

https://listman.redhat.com/archives/libvir-list/2021-June/msg00744.html

Comment 21 Michal Privoznik 2021-06-29 06:56:12 UTC
Merged upstream as:

438b50dda8 virSetUIDGIDWithCaps: Don't drop CAP_SETPCAP right away

v7.5.0-rc1-4-g438b50dda8

Comment 22 Fedora Update System 2021-06-29 15:34:56 UTC
FEDORA-2021-6679746a3d has been submitted as an update to Fedora 34. https://bodhi.fedoraproject.org/updates/FEDORA-2021-6679746a3d

Comment 23 Fedora Update System 2021-06-30 14:25:33 UTC
FEDORA-2021-6679746a3d has been pushed to the Fedora 34 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2021-6679746a3d`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2021-6679746a3d

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 24 Fedora Update System 2021-07-03 01:04:52 UTC
FEDORA-2021-bc6ad65da0 has been pushed to the Fedora 34 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2021-bc6ad65da0`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2021-bc6ad65da0

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 25 Fedora Update System 2021-07-13 01:14:28 UTC
FEDORA-2021-bc6ad65da0 has been pushed to the Fedora 34 stable repository.
If the problem still persists, please make note of it in this bug report.

Comment 26 Paul DeStefano 2021-07-19 16:39:07 UTC
I think I have the appropriate update, but the problem just occurred.  After a recent reboot, I started a VM and then, as I was adding USB devices to the VM, the connection to the VM failed and the window to the VM (spice) disappeared.  The VM was still running; I just connected again and the error did not recur.

In any case, the error message still shows up, contrary to what is reported in the advisory. Is this

Jul 19 08:58:41 <hostname> libvirtd[1998455]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng_apply
Jul 19 08:58:41 <hostname> qemu-system-x86_64[1998473]: Could not find keytab file: /etc/qemu/krb5.tab
Jul 19 08:58:41 <hostname> qemu-system-x86_64[1998473]: gssapiv2
Jul 19 08:58:41 <hostname> qemu-system-x86_64[1998473]: _sasl_plugin_load failed on sasl_server_plug_init
Jul 19 08:58:45 <hostname> kernel: usb 5-4: reset high-speed USB device number 4 using xhci_hcd
Jul 19 08:58:48 <hostname> kernel: usb 5-4: reset high-speed USB device number 4 using xhci_hcd
Jul 19 08:58:52 <hostname> gsd-media-keys[8880]: Unable to get default sink
Jul 19 08:58:52 <hostname> gsd-media-keys[8880]: Unable to get default source
Jul 19 08:58:52 <hostname> kernel: usb 5-2: reset full-speed USB device number 2 using xhci_hcd
Jul 19 08:58:53 <hostname> kernel: usb 5-2: reset full-speed USB device number 2 using xhci_hcd
Jul 19 08:58:58 <hostname> gnome-shell[8095]: Can't update stage views actor Gjs_ui_windowPreview_WindowPreview is on because it needs an allocation.
Jul 19 08:58:58 <hostname> gnome-shell[8095]: Can't update stage views actor ClutterActor is on because it needs an allocation.
Jul 19 08:58:58 <hostname> gnome-shell[8095]: Can't update stage views actor ClutterClone is on because it needs an allocation.
Jul 19 08:58:58 <hostname> gnome-shell[8095]: Can't update stage views actor StIcon is on because it needs an allocation.
Jul 19 08:58:58 <hostname> gnome-shell[8095]: Can't update stage views actor ClutterActor is on because it needs an allocation.
Jul 19 08:58:58 <hostname> gnome-shell[8095]: _st_create_shadow_pipeline_from_actor: assertion 'clutter_actor_has_allocation (actor)' failed
[the preceding six gnome-shell lines repeat eight more times]
Jul 19 08:59:01 <hostname> kernel: usb 5-4: reset high-speed USB device number 4 using xhci_hcd
Jul 19 08:59:01 <hostname> kernel: usb 5-4: reset high-speed USB device number 4 using xhci_hcd
Jul 19 08:59:29 <hostname> libvirtd[16385]: internal error: connection closed due to keepalive timeout
Jul 19 08:59:32 <hostname> systemd[7892]: Reached target Sound Card.
Jul 19 08:59:32 <hostname> kernel: input: SteelSeries  SteelSeries Arctis 7 Consumer Control as /devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb5/5-2/5-2:1.5/0003:1038:12AD.0012/input/input37
Jul 19 08:59:33 <hostname> kernel: input: SteelSeries  SteelSeries Arctis 7 as /devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb5/5-2/5-2:1.5/0003:1038:12AD.0012/input/input39
Jul 19 08:59:33 <hostname> kernel: hid-generic 0003:1038:12AD.0012: input,hiddev97,hidraw1: USB HID v1.11 Device [SteelSeries  SteelSeries Arctis 7] on usb-0000:0d:00.3-2/input5
Jul 19 08:59:33 <hostname> pipewire-pulse[8719]: node 0x55f7ee42dcd0: set_param Spa:Enum:ParamId:PortConfig (11) 0x55f7edb80d98: Input/output error
Jul 19 08:59:33 <hostname> pipewire[8718]: impl-core 0x56086b87eb60: error -5 for resource 58: node_set_param(Spa:Enum:ParamId:PortConfig) failed: Input/output error
Jul 19 08:59:33 <hostname> pipewire[8718]: client-node 0x56086bc738d0: error seq:847 -5 (node_set_param(Spa:Enum:ParamId:PortConfig) failed: Input/output error)
Jul 19 08:59:33 <hostname> pipewire-pulse[8719]: node 0x55f7ee42dcd0: set_param Spa:Enum:ParamId:PortConfig (11) 0x55f7edb80468: Input/output error
Jul 19 08:59:33 <hostname> pipewire[8718]: impl-core 0x56086b87eb60: error -5 for resource 58: node_set_param(Spa:Enum:ParamId:PortConfig) failed: Input/output error
Jul 19 08:59:33 <hostname> pipewire[8718]: client-node 0x56086bc738d0: error seq:875 -5 (node_set_param(Spa:Enum:ParamId:PortConfig) failed: Input/output error)

Comment 27 Michal Privoznik 2021-07-20 09:09:05 UTC
(In reply to Paul DeStefano from comment #26)
> I (think) have the appropriate update, but problem just occurred.

What's your libvirt version? The fix is in libvirt-7.0.0-6.fc34 and according to comments it fixed the journalctl issue:

https://bodhi.fedoraproject.org/updates/FEDORA-2021-bc6ad65da0

> After a
> recent reboot, I started a VM and then, as I was adding USB devices to the
> VM, the connection to the VM failed, window to VM (spice) disappeared.  VM
> was running, I just connected again, error did not recur.

This seems unrelated problem. If you want, please open a separate bug for it and attach debug logs to it.

https://libvirt.org/kbase/debuglogs.html

Comment 28 Paul DeStefano 2021-07-20 17:19:33 UTC
Hmm, I don't have libvirt installed.  Is that strange?  libvirtd isn't in the libvirt pkg, it's in libvirt-daemon, which I have.

libvirt-daemon-7.0.0-6.fc34.x86_64

I have version 7.0.0-6.fc34 of all the libvirt-* pkgs that I have installed.

This bug title is an exact match for what I'm reporting.  Now, I may have another problem, but the fact is this error is still occurring, possibly because the fix is not comprehensive.  Maybe you are saying this bug report covers a specific bug that has been fixed, even though the reported problem is not fixed.  I'm happy to open a new report if you want, but I thought this was the most logical place.  If not, that's fine, but I don't understand.

Comment 29 Michal Privoznik 2021-07-20 19:13:17 UTC
(In reply to Paul DeStefano from comment #28)
> Hmm, I don't have libvirt installed.  Is that strange?  libvirtd isn't in
> libvirt pkg, it's in libvirt-deamon, which I have.
> 
> libvirt-daemon-7.0.0-6.fc34.x86_64
> 
> I have version 7.0.0-6.fc34 of all the libvirt-* pkgs that I have installed.
> 

Alright. So you do have the correct version. I wonder whether you perhaps have an older daemon running, even though RPM should have restarted it on upgrade. Oh, are you using the LXC driver by any chance? A quick skim over our codebase revealed capng_apply() hiding there.

> This bug title is an exact match for what I'm reporting.  Now, I may have
> another problem, but the fact is this error is still occurring, possibly
> because the fix is not comprehensive.  Maybe you are saying this bug report
> goes to a specific bug that has been fixed, even though the reported problem
> is not fixed.  I'm happy to open a new report if you want, but I thought
> this was the most logical place.  If not, that's fine, but I don't
> understand.

Right, this bug is for the capng_apply problem. I thought you wanted to use it to fix the USB device attach problem. My bad.

Comment 30 Paul DeStefano 2021-07-20 21:49:19 UTC
I see what you are saying, but this was after a system reboot.  So, I can't imagine it could have been the old daemon.

Hmm, I don't know what you mean by LXC driver, so I assume that's a 'no'.

Sorry, I included the USB msgs only because the crash happened while I was passing USB devices through to the booting VM, so I couldn't be sure it was not related.  We can let it ride; maybe it will not happen after the next update/reboot I do.  Let me know if you think I should report back here or if you still think I should start a new bug.

Comment 31 Michal Privoznik 2021-07-21 10:55:51 UTC
(In reply to Paul DeStefano from comment #30)
> I see what you are saying, but this was after a system reboot.  So, I can't
> imagine it could have been the old daemon.
> 
> Hmm, I don't know what you mean by LXC driver, so I assume that's a 'no'.

Yeah. But just to be sure 'virsh -c lxc:/// list' should print nothing.

But if it's after a reboot then it's definitely the upgraded daemon. I wonder whether libvirtd doesn't have CAP_SETPCAP to begin with. Can you please share the output of:

cat /proc/$(pgrep libvirtd)/status

Comment 32 Paul DeStefano 2021-07-21 19:25:58 UTC
Thanks for your continued help and patience, Michal.

$ sudo -i virsh -c lxc:/// list
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/virtlxcd-sock': No such file or directory

$ cat /proc/$(pgrep libvirtd)/status
Name:	libvirtd
Umask:	0077
State:	S (sleeping)
Tgid:	16385
Ngid:	0
Pid:	16385
PPid:	7892
TracerPid:	0
Uid:	13013	13013	13013	13013
Gid:	13013	13013	13013	13013
FDSize:	64
Groups:	4 10 11 18 39 54 938 982 985 13013 
NStgid:	16385
NSpid:	16385
NSpgid:	16384
NSsid:	16384
VmPeak:	 1815888 kB
VmSize:	 1750576 kB
VmLck:	       0 kB
VmPin:	       0 kB
VmHWM:	   55692 kB
VmRSS:	   43104 kB
RssAnon:	   18732 kB
RssFile:	   24372 kB
RssShmem:	       0 kB
VmData:	  324216 kB
VmStk:	     964 kB
VmExe:	     236 kB
VmLib:	   37248 kB
VmPTE:	     376 kB
VmSwap:	     496 kB
HugetlbPages:	       0 kB
CoreDumping:	0
THP_enabled:	1
Threads:	21
SigQ:	6/127980
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	0000000000001000
SigCgt:	0000000180004007
CapInh:	0000000000000000
CapPrm:	0000000000000000
CapEff:	0000000000000000
CapBnd:	000001ffffffffff
CapAmb:	0000000000000000
NoNewPrivs:	0
Seccomp:	0
Seccomp_filters:	0
Speculation_Store_Bypass:	thread vulnerable
SpeculationIndirectBranch:	conditional enabled
Cpus_allowed:	ffffffff
Cpus_allowed_list:	0-31
Mems_allowed:	00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:	0
voluntary_ctxt_switches:	278492
nonvoluntary_ctxt_switches:	919

Comment 33 Kappa 2021-07-22 06:15:19 UTC
I upgraded to libvirt-daemon-7.0.0-6 a few days ago.
I still see the alert in the logs.

This is from the system journal. I used a cron job to run virsh to start up a VM.

Jul 22 05:44:35 server1 virsh[25459]: libcap-ng used by "/usr/bin/virsh" failed due to not having CAP_SETPCAP in capng_apply
Jul 22 05:44:35 server1 libvirtd[25480]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng>
Jul 22 05:44:35 server1 libvirtd[25481]: libcap-ng used by "/usr/sbin/libvirtd" failed due to not having CAP_SETPCAP in capng>
Jul 22 05:44:35 server1 virsh[25459]: libvirt version: 7.0.0, package: 6.fc34 (Fedora Project, 2021-07-02-20:38:33, )

Comment 34 Michal Privoznik 2021-07-22 07:22:58 UTC
(In reply to Paul DeStefano from comment #32)
> Thanks for your continued help and patience, Michal.
> 
> $ sudo -i virsh -c lxc:/// list
> error: failed to connect to the hypervisor
> error: Failed to connect socket to '/var/run/libvirt/virtlxcd-sock': No such
> file or directory
> 

Alright, so the error doesn't come from LXC driver. But ...

> $ cat /proc/$(pgrep libvirtd)/status
> Name:	libvirtd

> Uid:	13013	13013	13013	13013
> Gid:	13013	13013	13013	13013

> CapInh:	0000000000000000
> CapPrm:	0000000000000000
> CapEff:	0000000000000000
> CapBnd:	000001ffffffffff
> CapAmb:	0000000000000000

This says it all. This is a session daemon and as such has no capabilities. Regardless, when libvirt starts it looks around the system, executing various binaries to learn their capabilities (like dnsmasq, qemu-*), and it does so attempting to drop capabilities (which makes sense if libvirtd is run as root). A session daemon has no capabilities to start with (and in particular it doesn't have CAP_SETPCAP), hence the error. Alright, let me see if I can fix this. Meanwhile, I'm reopening this.

Comment 35 Michal Privoznik 2021-07-22 10:24:16 UTC
Steve,

I'm looking at the Fedora-only patch that introduced the logging, but it did so only for CAPNG_SELECT_BOUNDS. What is the reason? IMO calling capng_apply() with whatever argument must fail if the process doesn't have CAP_SETPCAP, mustn't it (assuming capng_apply() actually wants to change something; if it's a no-op then it shouldn't fail)? Speaking of which, the Fedora patch looks a bit harsh. It prints an error even when capng_apply() is a no-op; for instance in this case:

#include <stdio.h>
#include <cap-ng.h>

int main(int argc, char *argv[]) {
    capng_get_caps_process();
    capng_apply(CAPNG_SELECT_BOUNDS);
    return 0;
}


Nevertheless, there's just one capng_apply(CAPNG_SELECT_BOUNDS); call in Libvirt and I can fix that. Just want to know whether there are plans for logging other types.

Comment 36 Michal Privoznik 2021-07-22 15:31:16 UTC
Patches proposed on the list:

https://listman.redhat.com/archives/libvir-list/2021-July/msg00709.html

I've made a scratch build with them here:

https://koji.fedoraproject.org/koji/taskinfo?taskID=72411476

Comment 37 Steve Grubb 2021-07-22 16:11:59 UTC
Michal,

The issue is the bounding set. Touching it requires CAP_SETPCAP; you can drop normal capabilities without CAP_SETPCAP. So, no warning is necessary because capabilities work as expected. (The test program is not exactly a no-op, because libcap-ng simply does what you ask without any analysis. Meaning, if you ask to do anything with the bounding set without CAP_SETPCAP, it will result in an error from the kernel.) Note that the Fedora patch removes the actual failed state, which is different from how upstream works. Instead, it warns that it failed without returning a failed result. This is to allow programs to fix themselves before error codes are returned and programs exit as a result. I plan to remove the Fedora patch when I'm reasonably sure that major apps have been updated.

Comment 38 Fedora Update System 2021-07-27 18:48:54 UTC
FEDORA-2021-bce7f9b98c has been submitted as an update to Fedora 34. https://bodhi.fedoraproject.org/updates/FEDORA-2021-bce7f9b98c

Comment 39 Fedora Update System 2021-07-28 01:29:15 UTC
FEDORA-2021-bce7f9b98c has been pushed to the Fedora 34 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2021-bce7f9b98c`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2021-bce7f9b98c

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 40 Steve Hayes 2021-08-03 14:38:54 UTC
@sgrubb@redhat.com re your comment last month about being reasonably sure that major apps have been updated. I just spotted this occurring, harmlessly I think, with dhcrelay on F34.

Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Dropped all unnecessary capabilities.
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Internet Systems Consortium DHCP Relay Agent 4.4.2b1
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Copyright 2004-2019 Internet Systems Consortium.
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: All rights reserved.
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: For info, please visit https://www.isc.org/software/dhcp/
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Listening on LPF/ens3/52:54:00:02:03:01
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Sending on   LPF/ens3/52:54:00:02:03:01
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Sending on   Socket/fallback
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: libcap-ng used by "/usr/sbin/dhcrelay" failed due to not having CAP_SETPCAP in capng_apply
Aug 03 15:26:19 phineas.purplehayes.uk dhcrelay[30406]: Dropped all capabilities.
Aug 03 15:26:19 phineas.purplehayes.uk audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dhcrelay comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 03 15:26:19 phineas.purplehayes.uk systemd[1]: Started DHCP Relay Agent Daemon.
░░ Subject: A start job for unit dhcrelay.service has finished successfully
░░ Defined-By: systemd


Hope it helps...

Steve

Comment 41 Steve Grubb 2021-08-04 21:02:50 UTC
Thanks for the info

Comment 42 Scott Williams 2021-08-18 16:15:35 UTC
This fixed it for me.  I'll give a +1 in bodhi.  Thanks!

Comment 43 essin 2021-09-07 05:13:29 UTC
Linux fedora 5.13.13-200.fc34.x86_64 #1 SMP Thu Aug 26 17:06:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux


I just tried to create a simple service to mount my cifs volumes, since putting the entries in fstab doesn't seem to work on any Linux.

This is the script:
mount-servers.sh 
#!/bin/bash

mount -t cifs //192.168.1.200/Drive_E /mnt/server-e -o credentials=/root/.smbcredentials_server,nounix,dir_mode=0777,file_mode=0777
mount -t cifs //192.168.1.200/Drive_E/Dropbox /mnt/dropbox -o credentials=/root/.smbcredentials_server,nounix,dir_mode=0777,file_mode=0777
mount -t cifs //192.168.1.200/Drive_D /mnt/server-d -o credentials=/root/.smbcredentials_server,nounix,dir_mode=0777,file_mode=0777

This is the service:
[Unit]
Description=mount-servers systemd service.

[Service]
Type=simple
ExecStart=/bin/bash /usr/bin/mount-servers.sh

[Install]
WantedBy=multi-user.target



I got these messages when starting it:
× servers.service - mount-servers systemd service.
     Loaded: loaded (/etc/systemd/system/servers.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Mon 2021-09-06 22:03:26 PDT; 15s ago
    Process: 6951 ExecStart=/bin/bash /usr/bin/mount-servers.sh (code=exited, status=32)
   Main PID: 6951 (code=exited, status=32)
        CPU: 14ms

Sep 06 22:03:26 fedora bash[6958]: mount error(16): Device or resource busy
Sep 06 22:03:26 fedora bash[6958]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Sep 06 22:03:26 fedora mount.cifs[6962]: libcap-ng used by "/usr/sbin/mount.cifs" failed due to not having CAP_SETPCAP in capng_apply
Sep 06 22:03:26 fedora bash[6961]: mount error(16): Device or resource busy
Sep 06 22:03:26 fedora bash[6961]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Sep 06 22:03:26 fedora bash[6963]: mount: /mnt/data: /dev/sdb1 already mounted on /mnt/backup.
Sep 06 22:03:26 fedora bash[6964]: mount: /mnt/backup: /dev/sdc1 already mounted on /mnt/data.
Sep 06 22:03:26 fedora bash[6965]: mount: /mnt/timeshift: /dev/sdd1 already mounted on /mnt/timeshift.
Sep 06 22:03:26 fedora systemd[1]: servers.service: Main process exited, code=exited, status=32/n/a
Sep 06 22:03:26 fedora systemd[1]: servers.service: Failed with result 'exit-code'.

If the problem is supposed to have been corrected, perhaps there is something that I overlooked, but it still seems broken to me.
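[Editor's note, not a point raised in the thread: mount-servers.sh runs once and exits, so Type=oneshot with RemainAfterExit=yes describes it more accurately to systemd than Type=simple. A hedged variant of the unit above, assuming the same script path:]

```ini
[Unit]
Description=mount-servers systemd service.
# Wait for the network before trying to reach the CIFS server.
After=network-online.target
Wants=network-online.target

[Service]
# oneshot: the service is considered started when the script exits;
# RemainAfterExit keeps it shown as active afterwards.
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash /usr/bin/mount-servers.sh

[Install]
WantedBy=multi-user.target
```

[Note the status=32 failures in the log come from mount itself (the targets were already mounted); the libcap-ng line from mount.cifs is the separate warning this bug tracks, which Steve addresses in the next comment.]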

Comment 44 Steve Grubb 2021-09-07 20:45:30 UTC
(In reply to essin from comment #43)
> Linux fedora 5.13.13-200.fc34.x86_64 #1 SMP Thu Aug 26 17:06:39 UTC 2021
> x86_64 x86_64 x86_64 GNU/Linux
> 
> 
> I just tried to create a simple service to mount my cifs volumes since
> putting the entries in fstab doesn't seem to work on any linux.

This bug is for libvirtd; a new one needs to be opened for mount.cifs, or the maintainer won't know there is a problem. Thanks.

Comment 45 essin 2021-09-07 21:04:52 UTC
Thanks.
I'm sorry, but I'm new here and I don't know how to do what you suggest. I've tried the obvious, simple-minded things but they don't work. I'm asked to identify a component, but I don't know what to choose.

Please help.

Comment 46 Timothée Ravier 2021-09-08 12:20:39 UTC
(In reply to essin from comment #45)
> I'm sorry, but I'm new here and I don't know how to do what you suggest.
> I've tried the obvious, simple-minded things but they don't work. I'm ask to
> identify a component but I don't know what to choose.

Find which package includes the program you are having the issue with and report a bug for that package.

Comment 47 Michal Privoznik 2021-09-08 14:03:15 UTC
Running `dnf provides "*/mount.cifs"` shows the binary belongs to cifs-utils which is also the name of the corresponding component. Hope this helps.

Comment 48 Fedora Update System 2021-09-30 01:12:50 UTC
FEDORA-2021-bce7f9b98c has been pushed to the Fedora 34 stable repository.
If problem still persists, please make note of it in this bug report.

