Bug 2018885 - [virtiofs] virtiofsd debug log's timestamp is NULL
Summary: [virtiofs] virtiofsd debug log's timestamp is NULL
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.6
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 2130296 2132391
 
Reported: 2021-11-01 07:24 UTC by xiagao
Modified: 2023-05-16 08:54 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-6.2.0-22.module+el8.8.0+16816+1d3555ec
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2130296 2132391
Environment:
Last Closed: 2023-05-16 08:16:30 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments
Patch to fix this issue (1.21 KB, patch), 2022-08-16 22:31 UTC, Yusuke Okada (fujitsu)


Links
Gitlab redhat/rhel/src/qemu-kvm merge_requests 222 (last updated 2022-09-27 15:48:34 UTC)
Red Hat Issue Tracker RHELPLAN-101197 (last updated 2022-10-03 08:34:18 UTC)
Red Hat Product Errata RHSA-2023:2757 (last updated 2023-05-16 08:18:13 UTC)

Description xiagao 2021-11-01 07:24:29 UTC
Description of problem:
Start virtiofsd with the '-d' option; the timestamp in the debug log prints as "(null)", as shown below.
[root@dell-per440-02 kar]# /usr/libexec/virtiofsd --socket-path=/tmp/sock -o source=/home/test -d
[(null)] [ID: 00009695] virtio_session_mount: Waiting for vhost-user socket connection...
[(null)] [ID: 00009695] virtio_session_mount: Received vhost-user socket connection
[(null)] [ID: 00000001] virtio_loop: Entry
[(null)] [ID: 00000001] virtio_loop: Waiting for VU event
[(null)] [ID: 00000001] virtio_loop: Got VU event
[(null)] [ID: 00000001] virtio_loop: Waiting for VU event
[(null)] [ID: 00000001] virtio_loop: Got VU event
[(null)] [ID: 00000001] virtio_loop: Waiting for VU event
[(null)] [ID: 00000001] virtio_loop: Got VU event
[(null)] [ID: 00000001] virtio_loop: Waiting for VU event
[(null)] [ID: 00000001] virtio_loop: Got VU event
[(null)] [ID: 00000001] virtio_loop: Waiting for VU event
[(null)] [ID: 00000001] virtio_loop: Got VU event


Version-Release number of selected component (if applicable):
qemu-kvm-6.1.0-3.module+el8.6.0+12982+5e169f40.x86_64


How reproducible:
100%



Expected results:
[root@dell-per440-06 home]#  /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu1 -o source=/home/test,cache=auto -d
[2021-11-01 07:14:17.051363+0000] [ID: 00309763] virtio_session_mount: Waiting for vhost-user socket connection...



Additional info:
qemu-kvm-6.1.0-6.el9.x86_64 + qemu-virtiofsd-6.1.0-6.el9.x86_64 works.
qemu-kvm-6.0.0-30.module+el8.5.0+12586+476da3e1.x86_64 works.

Comment 1 Klaus Heinrich Kiwi 2021-11-01 11:00:06 UTC
(In reply to xiagao from comment #0)

> 
> Additional info:
> qemu-kvm-6.1.0-6.el9.x86_64 + qemu-virtiofsd-6.1.0-6.el9.x86_64 works.
> qemu-kvm-6.0.0-30.module+el8.5.0+12586+476da3e1.x86_64 works.

Sergio, can you take a look?

Since it's a regression, we might want to fix it - but this is perhaps also a good entry point to check whether the Rust daemon is affected too?

 -Klaus

Comment 2 xiagao 2022-03-07 04:14:05 UTC
Test on: virtiofsd-1.1.0-3.el9.x86_64
# /usr/libexec/virtiofsd --socket-path=/tmp/sock1 -o source=/home/test -o cache=none
[2022-03-07T04:10:01Z INFO  virtiofsd] Waiting for vhost-user socket connection...


Test on: qemu-kvm-6.2.0-8.module+el8.6.0+14324+050a5215.x86_64
# /usr/libexec/virtiofsd --socket-path=/tmp/sock1 -o source=/home/test -o cache=none --debug
virtio_session_mount: Waiting for vhost-user socket connection...

For RHEL 8.6.0, we should also have the timestamp and log level.

Comment 3 xiagao 2022-03-18 07:42:58 UTC
Hi Yiding,

Would you like to have this bz fixed in the next release, from the Fujitsu perspective? If yes, could you change the priority?

Thanks.
Xiaoling

Comment 4 Yiding Liu (Fujitsu) 2022-03-18 07:58:05 UTC
Hi, 

(In reply to xiagao from comment #3)
> Hi Yiding,
> 
> Would you like to have this bz fixed in the next release, from the Fujitsu
> perspective? If yes, could you change the priority?

I'd like to have this bz fixed in the next release, but I don't have permission to set the priority.

> 
> Thanks.
> Xiaoling

Comment 5 xiagao 2022-03-18 09:45:01 UTC
(In reply to Yiding Liu (Fujitsu) from comment #4)
> Hi, 
> 
> (In reply to xiagao from comment #3)
> > Hi Yiding,
> > 
> > Would you like to have this bz fixed in the next release, from the Fujitsu
> > perspective? If yes, could you change the priority?
> 
> I'd like to have this bz fixed in the next release, but I don't have
> permission to set the priority.

I set the priority to high.

> 
> > 
> > Thanks.
> > Xiaoling

Comment 7 xiagao 2022-05-10 07:15:02 UTC
RHEL8.7.0 also/still has this issue.

Comment 9 Sergio Lopez 2022-05-16 14:53:49 UTC
Hi,

This BZ is just for RHEL 8.x, isn't it? We're not shipping the Rust version of virtiofsd on 8.x, so I guess we don't really need to do anything here?

Thanks,
Sergio.

Comment 11 John Ferlan 2022-05-17 14:25:52 UTC
Given comment 9, why is this necessary for RHEL8?  

In RHEL 9, virtiofsd is a new Rust-based daemon, whereas RHEL 8 uses a C-based daemon that there are no plans to augment. Having a NULL date/time stamp in the log does not seem to be a sufficient reason to alter essentially frozen code. This probably should just be closed WONTFIX.

Comment 12 xiagao 2022-05-18 03:24:18 UTC
(In reply to John Ferlan from comment #11)
> Given comment 9, why is this necessary for RHEL8?  
> 
> In RHEL 9, virtiofsd is a new Rust-based daemon, whereas RHEL 8 uses a
> C-based daemon that there are no plans to augment. Having a NULL date/time
> stamp in the log does not seem to be a sufficient reason to alter
> essentially frozen code. This probably should just be closed WONTFIX.

The Fujitsu partner wants this fixed, per comment 4.
hshuai, could you double-confirm? Thanks.

Comment 13 Hu Shuai (Fujitsu) 2022-05-19 01:34:33 UTC
Hi,

We think this bug should be fixed.
It is a supportability problem: the timestamp is very important information for trouble investigation.

Thanks!
Hu Shuai

Comment 14 John Ferlan 2022-08-09 13:37:53 UTC
There are no plans to resolve non-critical virtiofs issues for RHEL8; closing as WONTFIX.

This was not a regression as the code never generated the timestamp for the virtiofsd debug logging.

Comment 15 Yusuke Okada (fujitsu) 2022-08-16 22:31:33 UTC
Created attachment 1905877 [details]
Patch to fix this issue

Comment 16 Yusuke Okada (fujitsu) 2022-08-16 22:35:12 UTC
Hi,

I think this issue is caused by the following line.

https://gitlab.com/qemu-project/qemu/-/blob/master/tools/virtiofsd/passthrough_ll.c#L4200

g_date_time_format() doesn't accept the "%f" specifier, so it returns NULL.
To get microseconds, we have to call g_date_time_get_microsecond() instead.
This issue can be resolved with the patch in comment #15; a sketch of the approach follows the log below.
After applying the patch, I got the following logs. It seems OK.

```
[root@pq7-0 build]# ./tools/virtiofsd/virtiofsd  --socket-path=/tmp/sock -o source=/home/test -d
[2022-08-16 22:17:25.587433+0000] [ID: 01401513] virtio_session_mount: Waiting for vhost-user socket connection...
[2022-08-16 22:17:37.366991+0000] [ID: 01401513] virtio_session_mount: Received vhost-user socket connection
[2022-08-16 22:17:37.395523+0000] [ID: 00000001] virtio_loop: Entry
[2022-08-16 22:17:37.395600+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417579+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.417794+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417815+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.417830+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417844+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.417854+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417863+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.417875+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417889+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.417902+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.417937+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.418061+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.418078+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.418088+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.418096+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.418108+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.418124+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.418141+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:37.418150+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-08-16 22:17:37.418160+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-08-16 22:17:40.434697+0000] [ID: 00000001] virtio_loop: Unexpected poll revents 11
[2022-08-16 22:17:40.434732+0000] [ID: 00000001] virtio_loop: Exit
[root@pq7-0 build]#
```
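
For reference, here is a minimal standalone sketch of that approach for glib older than 2.65.2 (my illustrative reconstruction, not the verbatim patch; the helper name log_timestamp is made up): have g_date_time_format() emit a literal printf-style "%06d" placeholder, then fill it with the value from g_date_time_get_microsecond().

```c
/* ts_sketch.c - build with: gcc ts_sketch.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

/* Works on glib < 2.65.2: "%%06d" passes through g_date_time_format()
 * as a literal printf specifier, which g_strdup_printf() then fills
 * with the microsecond value. */
static gchar *log_timestamp(void)
{
    g_autoptr(GDateTime) now = g_date_time_new_now_utc();
    g_autofree gchar *fmt = g_date_time_format(now, "%Y-%m-%d %H:%M:%S.%%06d%z");
    return g_strdup_printf(fmt, g_date_time_get_microsecond(now));
}

int main(void)
{
    g_autofree gchar *ts = log_timestamp();
    /* Prints e.g. [2022-08-16 22:17:25.587433+0000] [ID: 00000001] ... */
    printf("[%s] [ID: %08d] example log line\n", ts, 1);
    return 0;
}
```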

I believe the timestamp is necessary for support, so this issue should be fixed.

Best Regards,
Yusuke Okada

Comment 17 Yusuke Okada (fujitsu) 2022-08-17 21:29:02 UTC
Hi,

I found that the "%f" specifier became available in glib 2.65.2. It is not available in RHEL8.
https://github.com/GNOME/glib/commit/3b0eca5be69059d920253d0f54e54d9aef4d87d9
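
To see the failure mode directly, here is a minimal repro (assuming only glib; the file name and build line are mine, not from the bug):

```c
/* repro.c - build with: gcc repro.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

int main(void)
{
    g_autoptr(GDateTime) now = g_date_time_new_now_utc();
    /* On glib < 2.65.2 (e.g. RHEL 8), g_date_time_format() rejects "%f"
     * and returns NULL; the daemon's log prefix then prints as "(null)". */
    gchar *s = g_date_time_format(now, "%Y-%m-%d %H:%M:%S.%f%z");
    printf("[%s]\n", s ? s : "(null)");
    g_free(s);
    return 0;
}
```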

Best Regards,
Yusuke Okada

Comment 21 Yusuke Okada (fujitsu) 2022-08-23 18:27:40 UTC
Hi all,

I posted a patch to the community.
Could you help review this patch?

https://lists.gnu.org/archive/html/qemu-devel/2022-08/msg02797.html

Best Regards,
Yusuke Okada

Comment 24 Dr. David Alan Gilbert 2022-09-21 09:39:30 UTC
Yep I can take it; it's an easy backport.  Can someone confirm which version we need this against?
Just 8.8 CentOS Stream or do we need backports/Z/etc?

Comment 25 lijin 2022-09-22 01:59:00 UTC
Changing comment #24 to public for now so that the Fujitsu partner can view it and comment.

Comment 26 Hu Shuai (Fujitsu) 2022-09-22 02:07:29 UTC
(In reply to Dr. David Alan Gilbert from comment #24)
> Yep I can take it; it's an easy backport.  Can someone confirm which version
> we need this against?
> Just 8.8 CentOS Stream or do we need backports/Z/etc?

This fix is important to us; we hope it can be fixed in RHEL 8.7. Thanks a lot.

Best Regards
Shuai Hu

Comment 27 Hu Shuai (Fujitsu) 2022-09-22 07:03:45 UTC
I applied the patch https://lists.gnu.org/archive/html/qemu-devel/2022-08/msg03704.html on top of upstream and the test result looks OK.

virtiofsd version
```
# /usr/libexec/virtiofsd --version
virtiofsd version 7.1.50 (v7.1.0-401-g6338c30111-dirty)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
using FUSE kernel interface version 7.36
```

virtiofsd log
```
# tail -f fj-kvm-vm-fs1-virtiofsd.log
[2022-09-22 06:50:50.822434+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-09-22 06:50:50.822390+0000] [ID: 00000004] fv_queue_thread: Creating thread pool for Queue 1
[2022-09-22 06:50:50.822508+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-09-22 06:50:50.822538+0000] [ID: 00000004] fv_queue_thread: Start for queue 1 kick_fd 13
[2022-09-22 06:50:50.822555+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-09-22 06:50:50.822641+0000] [ID: 00000004] fv_queue_thread: Waiting for Queue 1 event
[2022-09-22 06:50:50.822669+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-09-22 06:50:50.822690+0000] [ID: 00000004] fv_queue_thread: Got queue event on Queue 1
[2022-09-22 06:50:50.822748+0000] [ID: 00000004] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 0 out: 0
[2022-09-22 06:50:50.822821+0000] [ID: 00000004] fv_queue_thread: Waiting for Queue 1 event
```

Comment 29 Yusuke Okada (fujitsu) 2022-09-27 15:19:11 UTC
Hi,

The fix has been merged upstream. Thanks for the help!
https://gitlab.com/qemu-project/qemu/-/commit/f16d15c9276bd8f501f861c39cbd4adc812d0c1d

Could you proceed with the backport?

Thanks,
Yusuke Okada

Comment 30 Dr. David Alan Gilbert 2022-09-27 15:39:52 UTC
Yep, starting with 8.8 first, I think.

Comment 44 Yanan Fu 2022-10-08 10:04:16 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 45 xiagao 2022-10-11 06:25:59 UTC
Tested with the fixed version; the result is good.
Test packages:
kernel-4.18.0-429.el8.x86_64
qemu-kvm-common-6.2.0-22.module+el8.8.0+16816+1d3555ec.x86_64

# /usr/libexec/virtiofsd -o source=/home --socket-path=/tmp/sock1 -d
[2022-10-08 05:36:13.012506+0000] [ID: 00034506] virtio_session_mount: Waiting for vhost-user socket connection...
[2022-10-08 05:36:15.151127+0000] [ID: 00034506] virtio_session_mount: Received vhost-user socket connection
[2022-10-08 05:36:15.162869+0000] [ID: 00000001] virtio_loop: Entry
[2022-10-08 05:36:15.162948+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-08 05:36:15.183368+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-08 05:36:15.183415+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-08 05:36:15.183518+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-08 05:36:15.183591+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-08 05:36:15.183688+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-08 05:36:15.183726+0000] [ID: 00000001] virtio_loop: Waiting for VU event

Comment 46 Hu Shuai (Fujitsu) 2022-10-11 07:11:53 UTC
Tested this fixed version on aarch64; the result is good too.

Test Env:
kernel-4.18.0-429.el8.aarch64
qemu-kvm-6.2.0-22.module+el8.8.0+16816+1d3555ec.aarch64

Test Result: virtiofsd log
```
# tail -f fj-kvm-vm-fs0-virtiofsd.log                                                                                                                                                                            
[2022-10-11 07:06:16.242975+0000] [ID: 00220533] virtio_session_mount: Waiting for vhost-user socket connection...
[2022-10-11 07:06:16.447207+0000] [ID: 00220533] virtio_session_mount: Received vhost-user socket connection
[2022-10-11 07:06:16.535850+0000] [ID: 00000001] virtio_loop: Entry
[2022-10-11 07:06:16.535922+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-11 07:06:16.732589+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-11 07:06:16.732687+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-11 07:06:16.732742+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-11 07:06:16.732782+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-11 07:06:16.732823+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-11 07:06:16.732853+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-11 07:06:16.732877+0000] [ID: 00000001] virtio_loop: Got VU event
```

Comment 49 xiagao 2022-10-12 06:49:52 UTC
Based on comment 45 and comment 46, changing status to verified.

Comment 55 errata-xmlrpc 2023-05-16 08:16:30 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2757

