Description of problem:
In k8s and some docker setups virtiofsd won't start; this is because it requests CAP_DAC_READ_SEARCH, which some container runtimes disallow. We've already got a fix upstream - 1c7cb1f52e2577e190c09c9a14e6b6f56f4a3ec3 (which the next rebase will get us). Needed for CNV 2.6.

Version-Release number of selected component (if applicable):
QEMU 5.1.0-13

How reproducible:
100%?

Steps to Reproduce:
1. Not 100% clear, but starting kubevirt seems to trigger it
2.
3.

Actual results:
An error about capng_apply failing

Expected results:
No error

Additional info:
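As a quick sanity check of the failure condition, the bounding set can be queried directly. This is a minimal sketch (not part of virtiofsd), assuming a Linux host; `cap_in_bounding_set` is a hypothetical helper name, and it uses `prctl(PR_CAPBSET_READ, ...)`, which any process may call:

```python
# Sketch: check whether CAP_DAC_READ_SEARCH is still in this process's
# bounding set - the capability some container runtimes remove, which
# triggered the capng_apply failure in unfixed virtiofsd builds.
# On Linux, CAP_DAC_READ_SEARCH is capability number 2; PR_CAPBSET_READ is 23.
import ctypes

PR_CAPBSET_READ = 23
CAP_DAC_READ_SEARCH = 2

libc = ctypes.CDLL(None, use_errno=True)

def cap_in_bounding_set(cap):
    """Return True if `cap` is in the bounding set, False if it was dropped."""
    res = libc.prctl(PR_CAPBSET_READ, cap, 0, 0, 0)
    if res < 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_CAPBSET_READ) failed")
    return bool(res)

if __name__ == "__main__":
    print("CAP_DAC_READ_SEARCH in bounding set:",
          cap_in_bounding_set(CAP_DAC_READ_SEARCH))
```

In a k8s/docker container hit by this bug the check would return False, matching the runtimes that disallow the capability.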
Is there an easier way to reproduce it on the QEMU side rather than through kubevirt? If so, could you please list some steps?

Thanks,
Menghuan
(In reply to menli from comment #4)
> Is there an easier way to reproduce it on qemu side rather than on kubevirt?
> If so, could you please list some steps?

The way I just did it was:

bash# capsh --print
Current: = cap_chown,cap_dac_override,cap_dac_read_search,.....
bash# capsh --drop=cap_dac_read_search --
bash# capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,....

bash# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux -o cache=none --thread-pool-size=1 -o log_level=debug

(in another shell)
/usr/libexec/qemu-kvm -M pc,memory-backend=mem,accel=kvm -smp 8 -cpu host -m 32G,maxmem=64G,slots=1 -object memory-backend-memfd,id=mem,size=32G,share=on -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=kernel

then that fails with:
[6644048724055859] [ID: 01043843] virtio_session_mount: Waiting for vhost-user socket connection...
[6644054823751779] [ID: 01043843] virtio_session_mount: Received vhost-user socket connection
[6644054825824024] [ID: 00000001] setup_capabilities: capng_apply failed

so then upgrade and try again.

> Thanks
>
> Menghuan
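The `capsh --print` step above can be scripted to decide whether the repro condition holds. A hedged sketch, assuming capsh's `Current: = cap_a,cap_b,...+ep` output shape; `dac_read_search_dropped` is a hypothetical helper name:

```python
# Sketch: given the "Current:" line from `capsh --print`, decide whether
# cap_dac_read_search has been dropped - the condition under which an
# unfixed virtiofsd fails with "setup_capabilities: capng_apply failed".
def dac_read_search_dropped(current_line):
    caps = current_line.split("=", 1)[1].strip()
    caps = caps.split("+", 1)[0]                  # strip the "+ep" flag suffix
    names = {c.strip() for c in caps.split(",")}
    return "cap_dac_read_search" not in names

# The two shapes of "Current:" line seen in the reproduction above:
with_cap = "Current: = cap_chown,cap_dac_override,cap_dac_read_search+ep"
without_cap = "Current: = cap_chown,cap_dac_override,cap_fowner+ep"
print(dac_read_search_dropped(with_cap))     # False: capability still present
print(dac_read_search_dropped(without_cap))  # True: repro condition met
```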
(In reply to Dr. David Alan Gilbert from comment #5)
> The way I just did it was:
> [...]
> so then upgrade and try again.

Thanks for your support~

Reproduced it with the steps above; the result is:

[root@dell-per730-48 ~]# capsh --drop=cap_dac_read_search --
[root@dell-per730-48 ~]# capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,38,39+ep
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,38,39
Ambient set =
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=0(root)
gid=0(root)
groups=0(root)
[root@dell-per730-48 ~]# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/ -o cache=none --thread-pool-size=1 -o log_level=debug
[58871212556458] [ID: 00020754] virtio_session_mount: Waiting for vhost-user socket connection...
[58875112760626] [ID: 00020754] virtio_session_mount: Received vhost-user socket connection
[58875121075455] [ID: 00000001] setup_capabilities: capng_apply
ITM4 is Mon 2020-11-30, but the bug is still in POST status; could we set a new ITM instead?

Thanks,
Menghuan
(In reply to menli from comment #6)
> [...]
> [58875121075455] [ID: 00000001] setup_capabilities: capng_apply

Sorry, it should be:
[58875121075455] [ID: 00000001] setup_capabilities: capng_apply failed
(In reply to menli from comment #8)
> [...]
> sorry, it should be:
> [58875121075455] [ID: 00000001] setup_capabilities: capng_apply failed

OK, and now try it with the 8.4.0 build?
So this fix has been added to the latest 8.4.0 QEMU build, right? If so, could you change the bug status to MODIFIED? Thanks.

As mentioned in comment 7, ITM4 is Mon 2020-11-30 and the bug is still in POST status; could we set a new ITM instead?

Thanks,
Menghuan
(In reply to Dr. David Alan Gilbert from comment #9)
> [...]
> OK, and now try it with the 8.4.0 build?

Reproduced this issue as in comment 5; the result is in comment 6.
QEMU version: qemu-kvm-5.1.0-15.module+el8.3.1+8772+a3fdeccd.x86_64

Tested it on the 8.4.0 build qemu-kvm-5.2.0-0.module+el8.4.0+8855+a9e237a9.x86_64; the result is as follows, so this issue is fixed.

[root@dell-per440-01 test]# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/ -o cache=none --thread-pool-size=1 -o log_level=debug
[12919128179052] [ID: 00016166] virtio_session_mount: Waiting for vhost-user socket connection...
[12922964272018] [ID: 00016166] virtio_session_mount: Received vhost-user socket connection
[12922973126483] [ID: 00000001] virtio_loop: Entry
[12922973149440] [ID: 00000001] virtio_loop: Waiting for VU event
[12923086608421] [ID: 00000001] virtio_loop: Got VU event
[12923086639979] [ID: 00000001] virtio_loop: Waiting for VU event
[12923086653791] [ID: 00000001] virtio_loop: Got VU event
[12923086667551] [ID: 00000001] virtio_loop: Waiting for VU event
[12923086680037] [ID: 00000001] virtio_loop: Got VU event
[12923086689112] [ID: 00000001] virtio_loop: Waiting for VU event
[12923086694281] [ID: 00000001] virtio_loop: Got VU event
[12923086713737] [ID: 00000001] virtio_loop: Waiting for VU event
[12923086728671] [ID: 00000001] virtio_loop: Got VU event
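The pass/fail distinction between the two builds is visible in the virtiofsd debug log. A hedged sketch of an automated check (not part of any test suite here; `classify_virtiofsd_log` is a hypothetical helper), keying on the two outcomes seen in this bug:

```python
# Sketch: classify a virtiofsd debug log as affected vs. fixed, based on
# the two outcomes recorded in this bug: "setup_capabilities: capng_apply
# failed" (affected build) vs. reaching "virtio_loop: Entry" (fixed build).
def classify_virtiofsd_log(lines):
    for line in lines:
        if "setup_capabilities: capng_apply failed" in line:
            return "affected"
        if "virtio_loop: Entry" in line:
            return "fixed"
    return "inconclusive"

# Log excerpts from the affected (5.1.0) and fixed (5.2.0) runs above:
bad_log = [
    "[58875112760626] [ID: 00020754] virtio_session_mount: Received vhost-user socket connection",
    "[58875121075455] [ID: 00000001] setup_capabilities: capng_apply failed",
]
good_log = [
    "[12922964272018] [ID: 00016166] virtio_session_mount: Received vhost-user socket connection",
    "[12922973126483] [ID: 00000001] virtio_loop: Entry",
]
print(classify_virtiofsd_log(bad_log))   # affected
print(classify_virtiofsd_log(good_log))  # fixed
```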
Mirek, can it move to MODIFIED, based on comment #5?
(In reply to Amnon Ilan from comment #12)
> Mirek, Can it move to MODIFIED? based on comment#5

Not sure it's ready for MODIFIED. Is there a fix identified that we can refer to?
Fix is already upstream; it was posted for backport here:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2020-November/msg01533.html
(In reply to Dr. David Alan Gilbert from comment #14)
> Fix is already upstream; it was posted for backport here:
> http://post-office.corp.redhat.com/archives/rhvirt-patches/2020-November/msg01533.html

Thanks for the pointer. The fix is in the rebase, so moving to MODIFIED.
Per comment 11, changing status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2098