Bug 2097209 - [virtiofs] mount virtiofs failed: SELinux: (dev virtiofs, type virtiofs) getxattr errno 111
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.7
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
: ---
Assignee: Dr. David Alan Gilbert
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 2089955
 
Reported: 2022-06-15 07:12 UTC by xiagao
Modified: 2022-11-08 09:45 UTC (History)
CC List: 21 users

Fixed In Version: qemu-kvm-6.2.0-16.module+el8.7.0+15743+c774064d
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-08 09:20:10 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/rhel/src/qemu-kvm qemu-kvm merge_requests 193 0 None None None 2022-06-16 08:48:50 UTC
IBM Linux Technology Center 198766 0 None None None 2022-06-24 05:52:47 UTC
Red Hat Issue Tracker RHELPLAN-125303 0 None None None 2022-06-15 07:17:49 UTC
Red Hat Product Errata RHSA-2022:7472 0 None None None 2022-11-08 09:21:14 UTC

Description xiagao 2022-06-15 07:12:01 UTC
Description of problem:

# mount -t virtiofs myfs /mnt/a
mount: /mnt/a: mount(2) system call failed: Connection refused.

dmesg info:
SELinux: (dev virtiofs, type virtiofs) getxattr errno 111

# getenforce
Enforcing


Version-Release number of selected component (if applicable):
kernel-4.18.0-400.el8.x86_64(host/guest)
qemu-kvm-6.2.0-15.module+el8.7.0+15644+189a21f6.x86_64


How reproducible:
100%

Steps to Reproduce:
1. start virtiofsd on rhel870 host
# /usr/libexec/virtiofsd --socket-path=/tmp/virtiofsd.sock -o source=/home/test1,cache=always --debug
virtio_session_mount: Waiting for vhost-user socket connection...
virtio_session_mount: Received vhost-user socket connection
virtio_loop: Entry
fv_queue_set_started: qidx=0 started=1
fv_queue_thread: Start for queue 0 kick_fd 9
fv_queue_set_started: qidx=1 started=1
fv_queue_thread: Start for queue 1 kick_fd 12

2. boot up rhel870 guest with virtiofs device
    -m 4096 \
    -object memory-backend-file,mem-path=/dev/shm,share=yes,size=4G,id=mem-mem1  \
    -chardev socket,id=char_virtiofs_fs,path=/tmp/virtiofsd.sock \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs,chardev=char_virtiofs_fs,tag=myfs,queue-size=1024,bus=pcie-root-port-4,addr=0x0 \

3. mount virtiofs in guest.

Actual results:
# mount -t virtiofs myfs /mnt/a
mount: /mnt/a: mount(2) system call failed: Connection refused.

dmesg info:
SELinux: (dev virtiofs, type virtiofs) getxattr errno 111

Expected results:
mount successfully

Additional info:

Comment 1 Yanan Fu 2022-06-15 07:23:44 UTC
This scenario is part of the qemu-kvm component gating test and can block the gating pass, so adding the TestBlocker keyword. Thanks!

Comment 4 xiagao 2022-06-15 07:46:10 UTC
Pass on last version: qemu-kvm-core-6.2.0-14.module+el8.7.0+15289+26b4351e.x86_64

Comment 5 Vivek Goyal 2022-06-15 12:28:07 UTC
So, IIUC, qemu-kvm-core-6.2.0-14.module+el8.7.0+15289+26b4351e.x86_64 works but qemu-kvm-6.2.0-15.module+el8.7.0+15644+189a21f6.x86_64 fails. Right? While the kernel remains the same on host and guest (kernel-4.18.0-400.el8.x86_64).

Please confirm. If that's true, it points to some change going in qemu.

Comment 6 Dr. David Alan Gilbert 2022-06-15 12:55:26 UTC
That's a weird error; if it is between -14 and -15, I don't see any obvious cause:
http://pkgs.devel.redhat.com/cgit/rpms/qemu-kvm/commit/?h=stream-rhel-rhel-8.7.0&id=907a8f8fa54eba8f7e86e74c62da6542a2e42ad5

Comment 7 Dr. David Alan Gilbert 2022-06-15 14:26:28 UTC
I can confirm this on my test box.

Comment 8 Dr. David Alan Gilbert 2022-06-15 14:30:42 UTC
[(null)] [ID: 00000004] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 80 out: 56
[(null)] [ID: 00000004] fv_queue_worker: elem 0: with 2 out desc of length 56
[(null)] [ID: 00000004] unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
[(null)] [ID: 00000004]    unique: 2, error: -22 (Invalid argument), outsize: 16
[(null)] [ID: 00000004] virtio_send_msg: elem 0: with 2 in desc of length 80
[(null)] [ID: 00000004] fv_queue_thread: Waiting for Queue 1 event
[(null)] [ID: 00000001] virtio_loop: Got VU event

I wonder if this is the kernel headers update that broke it.

Comment 9 Dr. David Alan Gilbert 2022-06-15 14:34:45 UTC
I think the problem here is the kernel header update and we probably need to pull in qemu's:
commit 776dc4b1650062099df3cb4f90fa01c8e73eecfa
Author: Vivek Goyal <vgoyal>
Date:   Tue Feb 8 15:48:06 2022 -0500
 
    virtiofsd: Parse extended "struct fuse_init_in"
   
    Add some code to parse extended "struct fuse_init_in". And use a local
    variable "flag" to represent 64 bit flags. This will make it easier
    to add more features without having to worry about two 32bit flags (->flags
    and ->flags2) in "fuse_struct_in".
    
commit a086d54c6ffa38f7e71f182b63a25315304a3392
Author: Vivek Goyal <vgoyal>
Date:   Tue Feb 8 15:48:04 2022 -0500
 
    virtiofsd: Fix breakage due to fuse_init_in size change
    
    Kernel version 5.17 has increased the size of "struct fuse_init_in" struct.
    Previously this struct was 16 bytes and now it has been extended to
    64 bytes in size.

Comment 11 Vivek Goyal 2022-06-15 14:59:48 UTC
(In reply to Dr. David Alan Gilbert from comment #9)
> I think the problem here is the kernel header update and we probably need to
> pull in qemu's:
> commit 776dc4b1650062099df3cb4f90fa01c8e73eecfa
> Author: Vivek Goyal <vgoyal>
> Date:   Tue Feb 8 15:48:06 2022 -0500
>  
>     virtiofsd: Parse extended "struct fuse_init_in"
>    
>     Add some code to parse extended "struct fuse_init_in". And use a local
>     variable "flag" to represent 64 bit flags. This will make it easier
>     to add more features without having to worry about two 32bit flags
> (->flags
>     and ->flags2) in "fuse_struct_in".
>     
> commit a086d54c6ffa38f7e71f182b63a25315304a3392
> Author: Vivek Goyal <vgoyal>
> Date:   Tue Feb 8 15:48:04 2022 -0500
>  
>     virtiofsd: Fix breakage due to fuse_init_in size change
>     
>     Kernel version 5.17 has increased the size of "struct fuse_init_in"
> struct.
>     Previously this struct was 16 bytes and now it has been extended to
>     64 bytes in size.

If the kernel headers have been updated in the latest qemu, then it makes sense to pull in these patches. We knew the header update would break things, so we committed these patches upstream first, before the header update.

Comment 14 xiagao 2022-06-15 23:06:12 UTC
(In reply to Vivek Goyal from comment #5)
> So, IIUC, qemu-kvm-core-6.2.0-14.module+el8.7.0+15289+26b4351e.x86_64 works
> but qemu-kvm-6.2.0-15.module+el8.7.0+15644+189a21f6.x86_64 fails? Right?
> While guest kernel remains the same on host and guest
> (kernel-4.18.0-400.el8.x86_64(host/guest)).
> 
> Please confirm. If that's true, it points to some change going in qemu.

Yes. Re-tested with the same host kernel and guest kernel: qemu-kvm-core-6.2.0-15.module+el8.7.0+15644+189a21f6.x86_64 fails, while qemu-kvm-core-6.2.0-14.module+el8.7.0+15289+26b4351e.x86_64 works.

Kernel 4.18.0-395.el8.x86_64(guest)
kernel-4.18.0-398.el8.x86_64(host)

Comment 15 Min Deng 2022-06-16 03:29:22 UTC
Reproduced a similar issue on ppc64le on Power 8, so marking it as affecting all architectures.
kernel-4.18.0-400.el8.ppc64le
qemu-kvm-6.2.0-15.module+el8.7.0+15644+189a21f6.ppc64le
SLOF-20210217-1.module+el8.6.0+12861+13975d62.noarch

Command line
/usr/libexec/qemu-kvm -name vm3 -sandbox on \
    -machine pseries -nodefaults \
    -device VGA,bus=pci.0,addr=0x2 \
    -device i6300esb,bus=pci.0,addr=0x3 -watchdog-action reset \
    -device pci-bridge,id=pci_bridge,bus=pci.0,addr=0x4,chassis_nr=1 \
    -m 4096 \
    -object memory-backend-file,mem-path=/var/ram_vm3,share=yes,size=4G,id=mem-mem1 \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2 \
    -numa node,memdev=mem-mem1,nodeid=0 \
    -cpu host \
    -chardev socket,wait=off,server=on,id=chardev_serial0,path=/tmp/ttt \
    -device spapr-vty,id=serial0,reg=0x30000000,chardev=chardev_serial0 \
    -object rng-random,filename=/dev/urandom,id=passthrough-1i0XOugg \
    -device virtio-rng-pci,id=virtio-rng-pci-UW43yazK,rng=passthrough-1i0XOugg,bus=pci.0,addr=0x5 \
    -device qemu-xhci,id=usb1,bus=pci.0,addr=0x6 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kar/vt_test_images/rhel870-ppc64le-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pci.0,addr=0x7 \
    -chardev socket,id=char_virtiofs_fs3,path=/tmp/virtiofsd.sock \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs3,chardev=char_virtiofs_fs3,tag=myfs3,queue-size=1024,bus=pci_bridge,addr=0x1 \
    -vnc :2 -rtc base=utc,clock=host \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 \
    -monitor stdio

Actual results:
[root@localhost ~]# mount -t virtiofs myfs3 /mnt/myfs3
[  648.074252] SELinux: (dev virtiofs, type virtiofs) getxattr errno 111
mount: /mnt/myfs3: mount(2) system call failed: Connection refused.

Comment 18 Yanan Fu 2022-06-22 04:21:36 UTC
Hi David,

It seems the merge request is ready to merge, with the "BZ Approved" and "Reviewer Done" labels:
https://gitlab.com/redhat/rhel/src/qemu-kvm/qemu-kvm/-/merge_requests/193

Could you please check whether it is ready to merge?

This BZ blocks our gating test, leaving some other bugs pending because gating fails.
Many thanks!

Best regards
Yanan Fu

Comment 20 Yanan Fu 2022-06-23 03:48:38 UTC
Hi Mirek,

Could you please help merge it [1] if everything is OK?
[1]https://gitlab.com/redhat/rhel/src/qemu-kvm/qemu-kvm/-/merge_requests/193

Many Thanks!


Best regards
Yanan Fu

Comment 21 xiagao 2022-06-23 07:29:05 UTC
(In reply to Yanan Fu from comment #18)
> Hi David,
> 
> It seems the merge request is ready to merge, with the "BZ Approved" and
> "Reviewer Done" labels:
> https://gitlab.com/redhat/rhel/src/qemu-kvm/qemu-kvm/-/merge_requests/193
> 
> Could you please check whether it is ready to merge?
> 
> This BZ blocks our gating test, leaving some other bugs pending because
> gating fails.
> Many thanks!
> 
> Best regards
> Yanan Fu

Hi Dave, as Yanan mentioned above, this bug blocks the qemu-kvm component gating test; could you check the progress?
Thanks in advance.
Xiaoling

Comment 22 Dr. David Alan Gilbert 2022-06-23 08:35:50 UTC
(In reply to xiagao from comment #21)
> (In reply to Yanan Fu from comment #18)
> > Hi David,
> > 
> > It seems the merge request is ready to merge, with the "BZ Approved" and
> > "Reviewer Done" labels:
> > https://gitlab.com/redhat/rhel/src/qemu-kvm/qemu-kvm/-/merge_requests/193
> > 
> > Could you please check whether it is ready to merge?
> > 
> > This BZ blocks our gating test, leaving some other bugs pending because
> > gating fails.
> > Many thanks!
> > 
> > Best regards
> > Yanan Fu
> 
> Hi Dave, as Yanan mentioned above, this bug blocks the qemu-kvm component
> gating test; could you check the progress?
> Thanks in advance.
> Xiaoling


I think it's just the maintainers handling the merge; moving needinfo to mrezanin

Comment 24 Yanan Fu 2022-06-24 02:33:42 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 27 xiagao 2022-06-27 08:26:46 UTC
Ran a virtiofs test loop; the results look good, so verifying this bug.

Test pkg:
kernel-4.18.0-402.el8.x86_64
edk2-ovmf-20220126gitbb1bba3d77-2.el8.noarch
qemu-kvm-6.2.0-16.module+el8.7.0+15743+c774064d.x86_64

Comment 30 errata-xmlrpc 2022-11-08 09:20:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7472

