
Bug 2132391

Summary: [virtiofs] virtiofsd debug log's timestamp is NULL [rhel-8.7.0.z]
Product: Red Hat Enterprise Linux 8
Component: qemu-kvm
qemu-kvm sub component: virtio-fs
Status: CLOSED ERRATA
Severity: high
Priority: urgent
Version: 8.6
Keywords: Reopened, Triaged, ZStream
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Unspecified
Reporter: RHEL Program Management Team <pgm-rhel-tools>
Assignee: Dr. David Alan Gilbert <dgilbert>
QA Contact: xiagao
Docs Contact:
CC: dgilbert, hshiina, hshuai, hyasuhar, kkiwi, kwolf, lijin, mrezanin, slopezpa, stefanha, timao, tumeya, vgoyal, virt-maint, ymankad, yusokada
Flags: pm-rhel: mirror+
Whiteboard:
Fixed In Version: qemu-kvm-6.2.0-20.module+el8.7.0+16905+efca5d32.2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2018885
Environment:
Last Closed: 2022-11-08 11:30:28 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2018885
Bug Blocks:

Comment 4 Hu Shuai (Fujitsu) 2022-10-14 08:53:11 UTC
Tested this fixed version on aarch64; the result is good.

Test Env:
kernel-4.18.0-425.3.1.el8.aarch64
qemu-kvm-6.2.0-20.module+el8.7.0+16905+efca5d32.2.aarch64

Test Result: virtiofsd log
```
# tail -f fj-kvm-vm-fs1-virtiofsd.log
[2022-10-14 08:42:45.384253+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-14 08:42:45.384257+0000] [ID: 00000004] fv_queue_thread: Creating thread pool for Queue 1
[2022-10-14 08:42:45.384276+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-14 08:42:45.384317+0000] [ID: 00000004] fv_queue_thread: Start for queue 1 kick_fd 12
[2022-10-14 08:42:45.384326+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-14 08:42:45.384327+0000] [ID: 00000004] fv_queue_thread: Waiting for Queue 1 event
[2022-10-14 08:42:45.384343+0000] [ID: 00000004] fv_queue_thread: Got queue event on Queue 1
[2022-10-14 08:42:45.384348+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-14 08:42:45.384353+0000] [ID: 00000004] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 0 out: 0
[2022-10-14 08:42:45.384364+0000] [ID: 00000004] fv_queue_thread: Waiting for Queue 1 event
```
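
For reference, a minimal sketch of how such a log can be produced and checked. The socket path, shared directory, and the assumption that unfixed builds printed a literal NULL in the timestamp field (as the bug summary suggests) are illustrative, not taken from this report:

```
# Start virtiofsd with debug logging (-d) and capture its output;
# the socket path and source directory below are hypothetical placeholders:
/usr/libexec/virtiofsd \
    --socket-path=/var/tmp/fs1-virtiofsd.sock \
    -o source=/srv/shared -d \
    &> fj-kvm-vm-fs1-virtiofsd.log &

# With the fixed build, every line should begin with a populated
# "[YYYY-MM-DD hh:mm:ss.ssssss+0000]" stamp; assuming the unfixed build
# printed NULL there, this grep should return nothing:
grep -n 'NULL' fj-kvm-vm-fs1-virtiofsd.log
```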

Comment 5 Yanan Fu 2022-10-16 14:44:15 UTC
QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 6 xiagao 2022-10-17 07:22:57 UTC
Tested this fixed version on x86_64; the result is good.

Test Env:
kernel-4.18.0-425.3.1.el8.x86_64
qemu-kvm-6.2.0-20.module+el8.7.0+16905+efca5d32.2.x86_64


Test Result: virtiofsd log
```
/usr/libexec/virtiofsd --socket-path=/var/tmp/avocado_r92w72cc/avocado-vt-vm1-fs-virtiofsd.sock -o source=/root/avocado/data/avocado-vt/virtio_fs_test/ -d -o cache=always
[2022-10-17 07:15:45.505366+0000] [ID: 00032119] virtio_session_mount: Waiting for vhost-user socket connection...
[2022-10-17 07:15:47.670114+0000] [ID: 00032119] virtio_session_mount: Received vhost-user socket connection
[2022-10-17 07:15:47.679403+0000] [ID: 00000001] virtio_loop: Entry
[2022-10-17 07:15:47.679475+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-17 07:15:47.724770+0000] [ID: 00000001] virtio_loop: Got VU event
[2022-10-17 07:15:47.724824+0000] [ID: 00000001] virtio_loop: Waiting for VU event
[2022-10-17 07:15:47.724899+0000] [ID: 00000001] virtio_loop: Got VU event
```
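
The first two lines above show virtiofsd waiting for, then receiving, a vhost-user connection. For context, a sketch of the QEMU side that establishes such a connection; the chardev id, tag name, memory size, and remaining machine options are illustrative assumptions, not values from this test run:

```
# Hypothetical guest launch attaching to the virtiofsd socket above;
# ids, tag, and sizes are placeholders:
qemu-system-x86_64 \
    -chardev socket,id=char0,path=/var/tmp/avocado_r92w72cc/avocado-vt-vm1-fs-virtiofsd.sock \
    -device vhost-user-fs-pci,chardev=char0,tag=myfs \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem \
    ...   # remaining machine options omitted

# Inside the guest, the share is then mounted by tag:
mount -t virtiofs myfs /mnt
```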

Comment 12 xiagao 2022-10-18 02:25:18 UTC
Based on comment 4 and comment 6, setting bug status to VERIFIED.

Comment 16 errata-xmlrpc 2022-11-08 11:30:28 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:rhel and virt-devel:rhel bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7820