Bug 1694164 - virtio-fs: host<->guest shared file system (qemu)
Summary: virtio-fs: host<->guest shared file system (qemu)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.2
Assignee: Dr. David Alan Gilbert
QA Contact: Yongxue Hong
URL:
Whiteboard:
Duplicates: 1519458
Depends On: 1694161
Blocks: 1519459 1613899 1663685 1694166 1741613 1741615 1742150 1801321
 
Reported: 2019-03-29 16:54 UTC by Stefan Hajnoczi
Modified: 2020-06-24 05:58 UTC (History)
CC: 28 users

Fixed In Version: qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:45:09 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:
yhong: needinfo-


Attachments
pjdfstest.log (78.72 KB, text/plain)
2020-02-05 03:06 UTC, Yongxue Hong


Links
System ID Private Priority Status Summary Last Updated
IBM Linux Technology Center 176720 0 None None None 2019-08-06 12:39:55 UTC
Red Hat Product Errata RHBA-2020:2017 0 None None None 2020-05-05 09:47:03 UTC

Description Stefan Hajnoczi 2019-03-29 16:54:00 UTC
The virtio_fs.ko driver provides host<->guest shared file system support in Linux.  For more information, see the project website at https://virtio-fs.gitlab.io/.

virtio-fs is required in RHEL to enable the following use cases:
1. Sharing data between the host and guest.
2. Booting from a directory tree on the host without the need for disk images.
3. Filesystem-as-a-service for Ceph so that guests are isolated from the storage backend (no access to storage network or distributed storage system configuration details).
4. Access to container rootfs for Kata Containers.
5. Access to PersistentVolumes in KubeVirt.
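At a high level, every verification run in this BZ follows the same three-step flow. Sketched here in outline only, with placeholder paths, sizes, and the "myfs" tag; the complete command lines appear in comments 20, 27, and 38:

```shell
# Command-line outline, not a complete invocation.

# 1. Host: export a directory over a vhost-user socket.
/usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/path/to/shared -o cache=always

# 2. Host: give the guest a vhost-user-fs-pci device on that socket.
#    virtio-fs needs shared guest RAM (memory-backend-file with share=on).
/usr/libexec/qemu-kvm \
    -m 8G \
    -object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,chardev=char0,tag=myfs \
    ...                      # machine, disk, and network options elided

# 3. Guest: mount the export by its tag.
mount -t virtiofs myfs /mnt
```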

Comment 1 Stefan Hajnoczi 2019-04-04 12:43:01 UTC
Hi Hanns-Joachim,
You have set the target release to 8.1.  This feature is unlikely to be ready for that release.

Would you like to share information about your requirements so they can be taken onboard for virtio-fs?

Thanks,
Stefan

Comment 2 Ademar Reis 2019-04-15 20:41:38 UTC
*** Bug 1519458 has been marked as a duplicate of this bug. ***

Comment 3 IBM Bug Proxy 2019-06-17 16:00:24 UTC
------- Comment From lagarcia.com 2019-06-17 11:58 EDT-------
(In reply to comment #6)
> Hi Hanns-Joachim,
> You have set the target release to 8.1.  This feature is unlikely to be
> ready for that release.
>
> Would you like to share information about your requirements so they can be
> taken onboard for virtio-fs?
>
> Thanks,
> Stefan

We have no specific requirements for RHEL 8.1. I reset the target release to RHEL 8.2.

Comment 4 IBM Bug Proxy 2019-08-24 18:10:18 UTC
------- Comment From fnovak.com 2019-08-24 14:00 EDT-------
Karen had mentioned virtio-fs for Power in the context of Kata..
Not much of any detail on that..  so if there's any more info on the relationship, that would be good to get into this BZ.
Also, are there any concerns/issues wrt virtio-fs and Power?

Comment 5 Dr. David Alan Gilbert 2019-09-03 10:02:15 UTC
(In reply to IBM Bug Proxy from comment #4)
> ------- Comment From fnovak.com 2019-08-24 14:00 EDT-------
> Karen had mentioned virtio-fs for Power in the context of Kata..
> Not much of any detail on that..  so if there's any more info on the
> relationship, that would be good to get into this BZ.
> Also, are there any concerns/issues wrt virtio-fs and Power?

None that we know of; but we've not tested it on power; it would be good to do
some testing to see if there are any surprises.

Comment 6 Dr. David Alan Gilbert 2019-10-17 16:34:40 UTC
qemu code merged; should land in 8.2 AV on rebase.
(Still need to upstream the daemon)

Comment 7 Luiz Capitulino 2019-10-18 15:24:05 UTC
That's wonderful to know, thank you very much David!

Just one question: where's the virtiofsd daemon going to live? In the qemu repo?

Comment 9 Danilo de Paula 2019-12-04 12:40:04 UTC
The patch series for this fix has been dropped from the queue.
So I'm also moving this BZ back to ASSIGNED.

Comment 10 IBM Bug Proxy 2019-12-05 17:20:33 UTC
------- Comment From muriloo.com 2019-12-05 12:13 EDT-------
*** This bug has been marked as a duplicate of bug 176720 ***

Comment 11 IBM Bug Proxy 2019-12-05 17:20:40 UTC
*** Bug 173810 has been marked as a duplicate of this bug. ***

Comment 12 Luiz Capitulino 2019-12-05 19:59:05 UTC
(In reply to Danilo Cesar de Paula from comment #9)
> The patch series for this fix has been dropped from the queue.
> So I'm also moving this BZ back to ASSIGNED.

Danilo, just to make sure I understand: the patchset that was dropped
was for virtiofsd, correct?

I'm asking because QEMU support for virtio-fs did make it in QEMU 4.2,
so I think we have two options for this BZ:

1. Keep it moved back to ASSIGNED until virtiofsd makes it (which is
   the current state); or

2. Set "Fixed in Version" to QEMU 4.2, which means QEMU support made it.
   Then open a new BZ to track the virtiofsd daemon

Stefan, Dave, what do you prefer?

Comment 14 Dr. David Alan Gilbert 2019-12-13 10:23:20 UTC
virtiofsd full patchset posted to qemu-devel:
https://lists.gnu.org/archive/html/qemu-devel/2019-12/msg02382.html

Comment 15 Dr. David Alan Gilbert 2019-12-13 18:25:04 UTC
We'll also want to backport Stefan's msi-x fix:
virtio-fs: fix MSI-X nvectors calculation which is now 366844f3d1329c6423dd752891a28ccb3ee8fddd in upstream qemu

Comment 16 Dr. David Alan Gilbert 2020-01-21 13:54:49 UTC
v2 patchset posted to qemu-devel https://lists.gnu.org/archive/html/qemu-devel/2020-01/msg04562.html

Comment 17 Dr. David Alan Gilbert 2020-01-24 11:20:35 UTC
Upstream merged:
   a43efa34c7d7b628cbf1ec0fe60043e5c91043ea  - Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20200123b' into staging

Will now start the downstreaming.

Comment 20 Yongxue Hong 2020-02-03 05:34:40 UTC
Verification steps:
The host kernel version:
[root@hp-dl388g8-17 virtio-fs-test]# uname -r
4.18.0-175.el8.x86_64

The guest kernel version:
[root@bootp-73-5-202 ~]# uname -r
4.18.0-167.el8.x86_64

The qemu version:
[root@hp-dl388g8-17 virtio-fs-test]# /usr/libexec/qemu-kvm --version
QEMU emulator version 4.2.0 (qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

The virtiofsd version:
[root@hp-dl388g8-17 virtio-fs-test]# /usr/libexec/virtiofsd -V
using FUSE kernel interface version 7.31

1. Create a shared directory on the host:
[root@hp-dl388g8-17 virtio-fs-test]# mkdir /tmp/virtiofs_test

2. Start the virtiofsd daemon:
[root@hp-dl388g8-17 virtio-fs-test]# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/tmp/virtiofs_test -o cache=always

3. Boot a guest that connects to the virtiofsd socket:
[root@hp-dl388g8-17 virtio-fs-test]#
/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1'  \
-sandbox on  \
-machine q35  \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x1 \
-m 8G  \
-smp 12,maxcpus=12,cores=6,threads=1,dies=1,sockets=2  \
-object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
-numa node,memdev=mem \
-chardev socket,id=char0,path=/tmp/vhostqemu \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs,bus=pcie_extra_root_port_0,addr=0x0 \
-device pvpanic,ioport=0x505,id=id2kYB0k \
-chardev socket,id=chardev_serial0,path=/var/tmp/serial-serial0,server,nowait \
-device isa-serial,id=serial0,chardev=chardev_serial0  \
-chardev socket,id=seabioslog_id_20200202-031603-bO5mVnaE,path=/var/tmp/seabios,server,nowait \
-device isa-debugcon,chardev=seabioslog_id_20200202-031603-bO5mVnaE,iobase=0x402 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-3,addr=0x0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/yhong/rhel820-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device virtio-net-pci,mac=9a:5a:6d:98:33:1b,id=id8WQbD1,netdev=idEDNgoB,bus=pcie.0-root-port-4,addr=0x0  \
-netdev tap,id=idEDNgoB,vhost=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
-vnc :0  \
-rtc base=utc,clock=host,driftfix=slew  \
-boot menu=off,order=cdn,once=c,strict=off \
-enable-kvm \
-monitor stdio \

4. Log into guest then mount the virtiofs:
[root@bootp-73-5-202 ~]# mount -t virtiofs myfs /mnt

5. Generate a file on the mountpoint inside guest:
[root@bootp-73-5-202 ~]# dd if=/dev/zero of=/mnt/dd_file bs=1M count=1024 oflag=direct
[root@bootp-73-5-202 ~]# md5sum /mnt/dd_file 
cd573cfaace07e7949bc0c46028904ff  /mnt/dd_file

6. Check that the file also appears in the shared directory on the host:
[root@hp-dl388g8-17 ~]# ls -lh /tmp/virtiofs_test/
total 1.0G
-rw-r--r--. 1 root root 1.0G Feb  3 00:17 dd_file
[root@hp-dl388g8-17 ~]# md5sum /tmp/virtiofs_test/dd_file 
cd573cfaace07e7949bc0c46028904ff  /tmp/virtiofs_test/dd_file

7. Generate a file in the shared directory on the host.
[root@hp-dl388g8-17 ~]# dd if=/dev/zero of=/tmp/virtiofs_test/dd_file2 bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 12.4057 s, 173 MB/s
[root@hp-dl388g8-17 ~]# md5sum /tmp/virtiofs_test/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /tmp/virtiofs_test/dd_file2

8. Check that the file also appears on the mountpoint inside the guest:
[root@bootp-73-5-202 ~]# ls -lh /mnt/dd_file2
-rw-r--r--. 1 root root 2.0G Feb  3 13:26 /mnt/dd_file2
[root@bootp-73-5-202 ~]# md5sum /mnt/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /mnt/dd_file2

Data is shared between the host and the guest successfully.
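Steps 5-8 above boil down to checking that the same file has the same md5 on both sides of the share. That check can be wrapped in a small helper; a sketch with nothing virtio-fs specific in it (`verify_shared_file` is a name invented here, and both directory views must be visible to the shell running it):

```shell
# verify_shared_file DIR_A DIR_B NAME
# Succeeds (and prints the checksum) when NAME has the same md5sum in
# both directories, e.g. the host-side source directory and a view of
# the guest-side mountpoint; fails with a diagnostic otherwise.
verify_shared_file() {
    dir_a=$1; dir_b=$2; name=$3
    sum_a=$(md5sum "$dir_a/$name" | cut -d' ' -f1)
    sum_b=$(md5sum "$dir_b/$name" | cut -d' ' -f1)
    if [ "$sum_a" = "$sum_b" ]; then
        echo "match $sum_a"
    else
        echo "mismatch: $sum_a vs $sum_b" >&2
        return 1
    fi
}
```

In the runs above, the two md5sum invocations happen on different machines; the helper only captures the comparison itself.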

Comment 21 Yongxue Hong 2020-02-03 05:43:08 UTC
Hi, David
For comment 20, is the verification sufficient? did I miss some points?

Thanks.

Comment 22 Dr. David Alan Gilbert 2020-02-03 09:56:14 UTC
(In reply to Yongxue Hong from comment #21)
> Hi, David
> For comment 20, is the verification sufficient? did I miss some points?
> 
> Thanks.

That's OK as a basic smoke test.
I think there are plans to do a proper test with lots of different filesystem accesses.

Comment 23 Yongxue Hong 2020-02-04 02:40:57 UTC
(In reply to Dr. David Alan Gilbert from comment #22)
> (In reply to Yongxue Hong from comment #21)
> > Hi, David
> > For comment 20, is the verification sufficient? did I miss some points?
> > 
> > Thanks.
> 
> That's OK as a basic smoke test.
So may I change the bug's status to VERIFIED?
> I think there are plans to do a proper test with lots of different
> filesystem accesses.
or I need to verify it by different filesystem accesses then change the status to VERIFIED?

Thanks.

Comment 24 Yongxue Hong 2020-02-04 03:07:16 UTC
(In reply to Dr. David Alan Gilbert from comment #22)
> (In reply to Yongxue Hong from comment #21)
> > Hi, David
> > For comment 20, is the verification sufficient? did I miss some points?
> > 
> > Thanks.
> 
> That's OK as a basic smoke test.
> I think there are plans to do a proper test with lots of different
> filesystem accesses.

We have a testing plan to cover the other function points of virtio-fs.

Comment 25 Yongxue Hong 2020-02-04 03:08:40 UTC
Hi, Zhenyu
Please help me to verify it on the power side.

Thanks.

Comment 26 Zhenyu Zhang 2020-02-04 03:32:01 UTC
(In reply to Yongxue Hong from comment #25)
> Hi, Zhenyu
> Please help me to verify it on the power side.
> 
> Thanks.

ok, testing, results updated later.

Comment 27 Zhenyu Zhang 2020-02-04 06:58:50 UTC
Verification steps:
The host kernel version:
[root@netqe-p9-01 ~]# uname -r 
4.18.0-175.el8.ppc64le

The guest kernel version:
[root@dhcp19-15-201 ~]# uname -r
4.18.0-167.el8.ppc64le

The qemu version:
[root@netqe-p9-01 ~]# /usr/libexec/qemu-kvm -version
QEMU emulator version 4.2.0 (qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

The virtiofsd version:
[root@netqe-p9-01 ~]# /usr/libexec/virtiofsd -V
using FUSE kernel interface version 7.31

1. Create a shared directory on the host:
[root@netqe-p9-01 ceph]# mkdir /tmp/virtiofs_test
[root@netqe-p9-01 ~]# df -h /tmp/virtiofs_test/
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/rhel_netqe--p9--01-root   50G  6.6G   44G  14% /

2. Start the virtiofsd daemon:
[root@netqe-p9-01 ceph]# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/tmp/virtiofs_test -o cache=always

3. Boot a guest that connects to the virtiofsd socket:
[root@netqe-p9-01 vt_test_images]# /usr/libexec/qemu-kvm \
> -name 'avocado-vt-vm1'  \
> -sandbox on  \
> -machine pseries  \
> -nodefaults \
> -device VGA,bus=pci.0,addr=0x1 \
> -m 8192  \
> -smp 12,maxcpus=12,cores=6,threads=1,dies=1,sockets=2  \
> -object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
> -numa node,memdev=mem \
> -chardev socket,id=char0,path=/tmp/vhostqemu \
> -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs,bus=pci.0,addr=0x0 \
> -cpu 'host'  \
> -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/fXeDnxMk,server,nowait \
> -mon chardev=qmp_id_qmpmonitor1,mode=control  \
> -chardev socket,path=/var/tmp/XeDnxMk,server,nowait,id=chardev_serial0 \
> -device spapr-vty,id=serial0,reg=0x30000000,chardev=chardev_serial0 \
> -device qemu-xhci,id=usb1,bus=pci.0,addr=0x6 \
> -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x7 \
> -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/zhenyzha/rhel820-ppc64le-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
> -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
> -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
> -device virtio-net-pci,mac=9a:4b:8b:2b:08:aa,id=id4gTBXz,netdev=idBUKRzj,bus=pci.0,addr=0x8  \
> -netdev tap,id=idBUKRzj,vhost=on \
> -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
> -vnc :30  \
> -rtc base=utc,clock=host,driftfix=slew  \
> -boot menu=off,order=cdn,once=c,strict=off \
> -enable-kvm \
> -monitor stdio
QEMU 4.2.0 monitor - type 'help' for more information
(qemu) qemu-kvm: warning: global mc146818rtc.lost_tick_policy has invalid class name
qemu-kvm: warning: kernel_irqchip allowed but unavailable: IRQ_XIVE capability must be present for KVM
Falling back to kernel-irqchip=off

4. Log into guest then mount the virtiofs:
[root@dhcp19-15-201 ~]# mount -t virtiofs myfs /mnt
[root@dhcp19-15-201 ~]# df -h
Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs                               3.7G     0  3.7G   0% /dev
tmpfs                                  3.8G     0  3.8G   0% /dev/shm
tmpfs                                  3.8G   24M  3.8G   1% /run
tmpfs                                  3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp19--129--23-root   17G  4.5G   13G  27% /
/dev/sda2                             1014M  220M  795M  22% /boot
tmpfs                                  763M  6.3M  757M   1% /run/user/0
myfs                                    50G  3.6G   47G   8% /mnt      

5. Generate a file on the mountpoint inside guest:
[root@dhcp19-15-201 ~]# dd if=/dev/zero of=/mnt/dd_file bs=1M count=1024 oflag=direct
[root@dhcp19-15-201 ~]# md5sum /mnt/dd_file 
cd573cfaace07e7949bc0c46028904ff  /mnt/dd_file

6. Check that the file also appears in the shared directory on the host:
[root@netqe-p9-01 ~]# ls -lh /tmp/virtiofs_test/
total 1.0G
-rw-r--r-- 1 root root 1.0G Feb  4 01:16 dd_file
[root@netqe-p9-01 ~]#  md5sum /tmp/virtiofs_test/dd_file 
cd573cfaace07e7949bc0c46028904ff  /tmp/virtiofs_test/dd_file

7. Generate a file in the shared directory on the host.
[root@netqe-p9-01 ~]# dd if=/dev/zero of=/tmp/virtiofs_test/dd_file2 bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 36.4934 s, 58.8 MB/s
[root@netqe-p9-01 ~]# md5sum /tmp/virtiofs_test/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /tmp/virtiofs_test/dd_file2

8. Check that the file also appears on the mountpoint inside the guest:
[root@dhcp19-15-201 ~]# ls -lh /mnt/dd_file2
-rw-r--r--. 1 root root 2.0G Feb  4 14:18 /mnt/dd_file2
[root@dhcp19-15-201 ~]# md5sum /mnt/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /mnt/dd_file2

On Power9, data is shared between the host and the guest successfully.

Comment 28 IBM Bug Proxy 2020-02-04 08:20:23 UTC
------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
(In reply to comment #27)

FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p), available at:

https://www.tuxera.com/community/posix-test-suite/

This provides a fair amount of coverage for a variety of filesystem operations.

Comment 29 Dr. David Alan Gilbert 2020-02-04 09:06:29 UTC
(In reply to Yongxue Hong from comment #23)
> (In reply to Dr. David Alan Gilbert from comment #22)
> > (In reply to Yongxue Hong from comment #21)
> > > Hi, David
> > > For comment 20, is the verification sufficient? did I miss some points?
> > > 
> > > Thanks.
> > 
> > That's OK as a basic smoke test.
> So may I change the bug's status to VERIFIED?
> > I think there are plans to do a proper test with lots of different
> > filesystem accesses.
> or I need to verify it by different filesystem accesses then change the
> status to VERIFIED?
> 
> Thanks.

Yes, as long as you have a test plan for a more extensive test that's fine.

Comment 30 Vivek Goyal 2020-02-04 12:59:08 UTC
(In reply to IBM Bug Proxy from comment #28)
> ------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
> (In reply to comment #27)
> 
> FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p),
> available at:
> 
> https://www.tuxera.com/community/posix-test-suite/
> 
> This provides a fair amount of coverage for a variety of filesystem
> operations.

Is it same as this.

https://github.com/pjd/pjdfstest

I generally clone this git tree and run pjdfstests.

Comment 31 Yongxue Hong 2020-02-05 01:07:40 UTC
(In reply to Vivek Goyal from comment #30)
> (In reply to IBM Bug Proxy from comment #28)
> > ------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
> > (In reply to comment #27)
> > 
> > FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p),
> > available at:
> > 
> > https://www.tuxera.com/community/posix-test-suite/
> > 
> > This provides a fair amount of coverage for a variety of filesystem
> > operations.
> 
> Is it same as this.
> 
> https://github.com/pjd/pjdfstest
> 
> I generally clone this git tree and run pjdfstests.

We will add a case to cover it by running pjdfstests.
Thanks.

Comment 32 Yongxue Hong 2020-02-05 01:09:35 UTC
Based on comment 20 and comment 27, this bug is fixed,
so I am changing the status to VERIFIED.

Thanks.

Comment 33 Yongxue Hong 2020-02-05 03:04:22 UTC
(In reply to Vivek Goyal from comment #30)
> (In reply to IBM Bug Proxy from comment #28)
> > ------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
> > (In reply to comment #27)
> > 
> > FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p),
> > available at:
> > 
> > https://www.tuxera.com/community/posix-test-suite/
> > 
> > This provides a fair amount of coverage for a variety of filesystem
> > operations.
> 
> Is it same as this.
> 
> https://github.com/pjd/pjdfstest
> 
> I generally clone this git tree and run pjdfstests.

I tried to run pjdfstest on a mountpoint of type virtio-fs, and the unlink failed.
Part of the log:
/mnt/pjdfstest/tests/symlink/03.t .......... 
1..6
not ok 1 - tried 'symlink 3e9ddde43f3e1e919bf3c4eb23d56402726da69c7c44ca598726a8d7ec688dfcecd230f39fc9715bff7d17b5ba19f57724884089b10b66d512c03cf0befaa35/8d60c371b79.../1c7168cea42b6fe6c98c1a02e7468e41c818c9c476997695500e36bdc395407f88336153cfb325c641edea5b2d3da3e54c24c83eaf1dd52734263db14f1f136 pjdfstest_726e3c7e5a516bcdaad425773b43f7e7', expected 0, got ENAMETOOLONG
not ok 2 - tried 'unlink pjdfstest_726e3c7e5a516bcdaad425773b43f7e7', expected 0, got ENOENT

Did you hit the same issue?

For the detailed log, please check the attachment.

Comment 34 Yongxue Hong 2020-02-05 03:06:47 UTC
Created attachment 1657730 [details]
pjdfstest.log

Comment 35 Vivek Goyal 2020-02-05 12:41:22 UTC
(In reply to Yongxue Hong from comment #33)
> (In reply to Vivek Goyal from comment #30)
> > (In reply to IBM Bug Proxy from comment #28)
> > > ------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
> > > (In reply to comment #27)
> > > 
> > > FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p),
> > > available at:
> > > 
> > > https://www.tuxera.com/community/posix-test-suite/
> > > 
> > > This provides a fair amount of coverage for a variety of filesystem
> > > operations.
> > 
> > Is it same as this.
> > 
> > https://github.com/pjd/pjdfstest
> > 
> > I generally clone this git tree and run pjdfstests.
> 
> I tried to run pjdfstests on the mountpoint which the type is virtio-fs,
> failed to unlink.
> the part of log:
> /mnt/pjdfstest/tests/symlink/03.t .......... 
> 1..6
> not ok 1 - tried 'symlink
> 3e9ddde43f3e1e919bf3c4eb23d56402726da69c7c44ca598726a8d7ec688dfcecd230f39fc97
> 15bff7d17b5ba19f57724884089b10b66d512c03cf0befaa35/8d60c371b79.../
> 1c7168cea42b6fe6c98c1a02e7468e41c818c9c476997695500e36bdc395407f88336153cfb32
> 5c641edea5b2d3da3e54c24c83eaf1dd52734263db14f1f136
> pjdfstest_726e3c7e5a516bcdaad425773b43f7e7', expected 0, got ENAMETOOLONG
> not ok 2 - tried 'unlink pjdfstest_726e3c7e5a516bcdaad425773b43f7e7',
> expected 0, got ENOENT
> 
> Did you hit the same issue?
> 
> The detail log, please check the attachment.

I have seen this on xfs. If you use ext4 as the filesystem on the host, I think this test case passes. For some reason the filename is too long for xfs (off by one). I have never dived deeper into the analysis; I would say open a bug for this and I will spend more time debugging it.
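The limit involved here is NAME_MAX, the longest single path component a filesystem accepts (255 bytes on ext4, xfs, and tmpfs); a passthrough layer that rejects names one byte earlier would match the off-by-one described above. The effective limit can be probed empirically; `probe_name_max` is a helper invented for this sketch:

```shell
# probe_name_max DIR
# Prints the longest file-name length (in bytes) that DIR accepts,
# by creating progressively longer names until creation fails
# (typically with ENAMETOOLONG). 255 is the usual answer on Linux.
probe_name_max() {
    dir=$1
    len=250
    while :; do
        name=$(awk -v n="$len" 'BEGIN { while (n-- > 0) printf "a" }')
        if touch "$dir/$name" 2>/dev/null; then
            rm -f "$dir/$name"
            len=$((len + 1))
        else
            echo $((len - 1))
            return 0
        fi
    done
}
```

Running this once against the host directory and once against the guest-side virtio-fs mountpoint would show directly whether virtiofsd shortens the limit.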

Comment 36 Yongxue Hong 2020-02-05 14:21:39 UTC
(In reply to Vivek Goyal from comment #35)
> (In reply to Yongxue Hong from comment #33)
> > (In reply to Vivek Goyal from comment #30)
> > > (In reply to IBM Bug Proxy from comment #28)
> > > > ------- Comment From KURZGREG.com 2020-02-04 03:14 EDT-------
> > > > (In reply to comment #27)
> > > > 
> > > > FWIW, I use Tuxera's "POSIX filesystem test suite" with VirtFS (9p),
> > > > available at:
> > > > 
> > > > https://www.tuxera.com/community/posix-test-suite/
> > > > 
> > > > This provides a fair amount of coverage for a variety of filesystem
> > > > operations.
> > > 
> > > Is it same as this.
> > > 
> > > https://github.com/pjd/pjdfstest
> > > 
> > > I generally clone this git tree and run pjdfstests.
> > 
> > I tried to run pjdfstests on the mountpoint which the type is virtio-fs,
> > failed to unlink.
> > the part of log:
> > /mnt/pjdfstest/tests/symlink/03.t .......... 
> > 1..6
> > not ok 1 - tried 'symlink
> > 3e9ddde43f3e1e919bf3c4eb23d56402726da69c7c44ca598726a8d7ec688dfcecd230f39fc97
> > 15bff7d17b5ba19f57724884089b10b66d512c03cf0befaa35/8d60c371b79.../
> > 1c7168cea42b6fe6c98c1a02e7468e41c818c9c476997695500e36bdc395407f88336153cfb32
> > 5c641edea5b2d3da3e54c24c83eaf1dd52734263db14f1f136
> > pjdfstest_726e3c7e5a516bcdaad425773b43f7e7', expected 0, got ENAMETOOLONG
> > not ok 2 - tried 'unlink pjdfstest_726e3c7e5a516bcdaad425773b43f7e7',
> > expected 0, got ENOENT
> > 
> > Did you hit the same issue?
> > 
> > The detail log, please check the attachment.
> 
> I have seen this on xfs. If you use ext4 as filesystem on host, I think this
> test case passes. For some
> reason filename is too long for xfs (off by one). I have never dived deeper
> to do the analysis. I would say
> open a bug for this and I will spend more time debugging this.

Okay, got it, thanks.

Comment 37 Ademar Reis 2020-02-05 22:56:06 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Comment 38 Zhenyu Zhang 2020-02-17 10:11:20 UTC
PASS on the Arm architecture.

Verification steps:
The host kernel version:
[root@hpe-moonshot-02-c05 ~]# uname -r
4.18.0-176.el8.aarch64

The guest kernel version:
[root@dhcp16-203-178 ~]# uname -r
4.18.0-175.el8.aarch64

The qemu version:
[root@hpe-moonshot-02-c05 ~]# /usr/libexec/qemu-kvm -version
QEMU emulator version 4.2.0 (qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

The virtiofsd version:
[root@hpe-moonshot-02-c05 ~]# /usr/libexec/virtiofsd -V
using FUSE kernel interface version 7.31

1. Create a shared directory on the host:
[root@hpe-moonshot-02-c05 test]# mkdir /tmp/virtiofs_test
[root@hpe-moonshot-02-c05 test]#  df -h /tmp/virtiofs_test/
Filesystem                                    Size  Used Avail Use% Mounted on
/dev/mapper/rhel_hpe--moonshot--02--c05-root   50G  3.4G   47G   7% /

2. Start the virtiofsd daemon:
[root@hpe-moonshot-02-c05 test]#  /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/tmp/virtiofs_test -o cache=always

3. Boot a guest that connects to the virtiofsd socket:
[root@hpe-moonshot-02-c05 ~]# /usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1'  \
-sandbox on  \
-drive file=/usr/share/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/home/kar/workspace/var/lib/avocado/data/avocado-vt/images/rhel820-aarch64-virtio-scsi_AAVMF_VARS.fd,if=pflash,format=raw,unit=1 \
-machine virt,gic-version=host  \
-nodefaults  \
-vga none \
-m 8192  \
-smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
-object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
-numa node,memdev=mem \
-chardev socket,id=char0,path=/tmp/vhostqemu \
-device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs,bus=pcie_extra_root_port_0,addr=0x0 \
-cpu 'host'  \
-chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/auPh7W,server,nowait \
-mon chardev=qmp_id_qmpmonitor1,mode=control  \
-chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avoSnuPh7W,server,nowait \
-mon chardev=qmp_id_catch_monitor,mode=control  \
-serial unix:'/var/tmp/a7W',server,nowait \
-device pcie-root-port,id=pcie.0-root-port-1,slot=1,chassis=1,addr=0x1,bus=pcie.0 \
-device qemu-xhci,id=usb1,bus=pcie.0-root-port-1,addr=0x0 \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-2,addr=0x0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/test/os.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-device virtio-net-pci,mac=9a:76:cc:45:8a:d5,rombar=0,id=idoMGVme,netdev=idZNOtAe,bus=pcie.0-root-port-3,addr=0x0  \
-netdev tap,id=idZNOtAe,vhost=on \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
-nographic  \
-rtc base=utc,clock=host,driftfix=slew \
-enable-kvm \
-monitor stdio 
QEMU 4.2.0 monitor - type 'help' for more information
(qemu) qemu-kvm: warning: global mc146818rtc.lost_tick_policy has invalid class name

(qemu) 

4. Log into guest then mount the virtiofs:
[root@dhcp16-203-178 ~]# mount -t virtiofs myfs /mnt
[root@dhcp16-203-178 ~]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
devtmpfs                                3.7G     0  3.7G   0% /dev
tmpfs                                   3.8G     0  3.8G   0% /dev/shm
tmpfs                                   3.8G   32M  3.7G   1% /run
tmpfs                                   3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp16--203--152-root   17G  4.4G   13G  27% /
/dev/sda2                              1014M  262M  753M  26% /boot
/dev/sda1                               599M  6.4M  593M   2% /boot/efi
tmpfs                                   763M  128K  763M   1% /run/user/0
myfs                                     50G  3.4G   47G   7% /mnt

5. Generate a file on the mountpoint inside guest:
[root@dhcp16-203-178 ~]# dd if=/dev/zero of=/mnt/dd_file bs=1M count=1024 oflag=direct
[root@dhcp16-203-178 ~]# md5sum /mnt/dd_file 
cd573cfaace07e7949bc0c46028904ff  /mnt/dd_file

6. Check that the file also appears in the shared directory on the host:
[root@hpe-moonshot-02-c05 ~]#  ls -lh /tmp/virtiofs_test/
total 1.0G
-rw-r--r--. 1 root root 1.0G Feb 17 02:36 dd_file
[root@hpe-moonshot-02-c05 ~]#  md5sum /tmp/virtiofs_test/dd_file 
cd573cfaace07e7949bc0c46028904ff  /tmp/virtiofs_test/dd_file

7. Generate a file in the shared directory on the host.
[root@hpe-moonshot-02-c05 ~]# dd if=/dev/zero of=/tmp/virtiofs_test/dd_file2 bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 19.6633 s, 109 MB/s
[root@hpe-moonshot-02-c05 ~]#  md5sum /tmp/virtiofs_test/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /tmp/virtiofs_test/dd_file2

8. Check that the file also appears on the mountpoint inside the guest:
[root@dhcp16-203-178 ~]# ls -lh /mnt/dd_file2
-rw-r--r--. 1 root root 2.0G Feb 17 15:38 /mnt/dd_file2
[root@dhcp16-203-178 ~]# md5sum /mnt/dd_file2
a981130cf2b7e09f4686dc273cf7187e  /mnt/dd_file2

Data is shared between the host and the guest successfully.

Comment 39 IBM Bug Proxy 2020-03-13 15:42:00 UTC
------- Comment From hannsj_uhl.com 2020-03-13 11:33 EDT-------
Comment from  Leonardo Augusto Guimaraes Garcia 2020-03-13 10:18:49 CDT

This bug is not related to Power. It has been opened for the generic support of virtio-fs in Red Hat products. Power is broken, and we don't have the patches to fix it accepted upstream yet.
I.e. for IBM Power this RHEL8.2 feature request is not applicable. Thanks.

Comment 40 Dr. David Alan Gilbert 2020-03-13 15:48:22 UTC
(In reply to IBM Bug Proxy from comment #39)
> ------- Comment From hannsj_uhl.com 2020-03-13 11:33 EDT-------
> Comment from  Leonardo Augusto Guimaraes Garcia 2020-03-13 10:18:49 CDT
> 
> This bug is not related to Power. It has been opened for the generic support
> of virtio-fs in Red Hat products. Power is broken, and we don't have the
> patches to fix it accepted upstream yet.
> I.e. for IBM Power this RHEL8.2 feature request is not applicable. Thanks.

Leonardo: Can you tell me what is broken about this on Power? I'm not aware of any upstream fixes for it waiting.

Comment 41 Leonardo Garcia 2020-03-30 19:45:13 UTC
(In reply to Dr. David Alan Gilbert from comment #40)
> Leonardo: Can you tell me what is broken about this on Power? I'm not aware
> of any upstream fixes for it waiting.

Sorry for the confusion (it was a miscommunication inside IBM). Virtio-fs is working fine on Power as well.

Comment 42 Dr. David Alan Gilbert 2020-03-31 08:36:07 UTC
(In reply to Leonardo Garcia from comment #41)
> (In reply to Dr. David Alan Gilbert from comment #40)
> > Leonardo: Can you tell me what is broken about this on Power? I'm not aware
> > of any upstream fixes for it waiting.
> 
> Sorry for the confusion (it was a miscommunication inside IBM). Virtio-fs is
> working fine on Power as well.

Great, thanks for letting me know.

Comment 44 errata-xmlrpc 2020-05-05 09:45:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

