Bug 2127397

Summary: nfsrahead - the config file cannot set the read_ahead_kb value as expected
Product: Red Hat Enterprise Linux 8
Component: nfs-utils
Version: 8.7
Status: CLOSED MIGRATED
Reporter: Yongcheng Yang <yoyang>
Assignee: Steve Dickson <steved>
QA Contact: Yongcheng Yang <yoyang>
CC: tbecker, xzhou
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Keywords: MigratedToJIRA, Reproducer
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2023-09-23 11:15:54 UTC

Description Yongcheng Yang 2022-09-16 07:54:52 UTC
Description of problem:
After `nfsrahead` was introduced via Bug 1946283, the automated test case sometimes fails because the resulting read_ahead_kb value doesn't match what the config file sets.

Oddly, when I check it by hand it works as expected. I eventually found that this problem can be reliably reproduced on the s390x arch. Please help check it.
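
For context, my understanding of the mechanism (an assumption based on the nfs-utils sources and nfsrahead(5), quoted from memory rather than verified on this box) is that nfsrahead is not meant to be run by hand at mount time: a udev rule fires when the mount's bdi device appears, runs the helper, and applies whatever it prints to read_ahead_kb, roughly:

 # /usr/lib/udev/rules.d/99-nfs.rules (approximate, an assumption)
 SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/usr/libexec/nfsrahead %k", ATTR{read_ahead_kb}="%c"

So a wrong sysfs value can mean either the helper computed the wrong number or udev failed to apply it.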

Version-Release number of selected component (if applicable):
nfs-utils-2.3.3-57.el8

How reproducible:
Always on s390x

Steps to Reproduce:
1. Configure nfsrahead in /etc/nfs.conf
2. Mount and check the read_ahead_kb value (a scripted version of this check is sketched below)
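
For reference, step 2 can be scripted as follows; the export and mount point are just the ones used in this report:

 # mount, resolve the mount's bdi device (major:minor, field 3 of mountinfo), read its readahead
 mount -t nfs -o vers=4 localhost:/export_test /mnt_test
 dev=$(awk '$5 == "/mnt_test" {print $3}' /proc/self/mountinfo)
 cat /sys/class/bdi/$dev/read_ahead_kb
 umount /mnt_test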


Actual results:
[root@ibm-z-509 ~]# cat /etc/exports
/export_test *(rw,no_root_squash)
[root@ibm-z-509 ~]# cat /etc/nfs.conf
[nfsrahead]
 nfs=15000
 nfs4=16000           <<<<<<<<<<<<<<
 default=128
[root@ibm-z-509 ~]# systemctl restart nfs-server
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=3
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
334 95 0:45 / /mnt_test rw,relatime shared:178 - nfs localhost:/export_test/ rw,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=::1,mountvers=3,mountport=20048,mountproto=udp6,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:45/read_ahead_kb
15000
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=4
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
336 95 0:46 / /mnt_test rw,relatime shared:178 - nfs4 localhost:/export_test rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:46/read_ahead_kb
128                   <<<<<<<<<<<<<
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]# rpm -q kernel nfs-utils
kernel-4.18.0-425.el8.s390x
nfs-utils-2.3.3-57.el8.s390x
[root@ibm-z-509 ~]#

Expected results:
The vers=4 mount should get read_ahead_kb=16000, as set by the "nfs4" key.

Additional info:

Comment 1 Yongcheng Yang 2022-09-16 08:09:09 UTC
Looks like the "nfs4" and "default" settings are getting mixed up: whatever is configured, the v4 mount ends up with 128.

[root@ibm-z-509 ~]# cat /etc/nfs.conf
[nfsrahead]
 nfs=15000
 default=256                    <<<<<<<<<<<<<< IMO the v4 mount should use this default value
[root@ibm-z-509 ~]# systemctl restart nfs-server
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=4
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
336 95 0:46 / /mnt_test rw,relatime shared:178 - nfs4 localhost:/export_test rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:46/read_ahead_kb
128                             <<<<<<<<<<<<<<
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]#



[root@ibm-z-509 ~]# vi /etc/nfs.conf
[root@ibm-z-509 ~]# cat /etc/nfs.conf
[nfsrahead]
 nfs=256
 nfs4=16000                    <<<<<<<<<<<<<<
[root@ibm-z-509 ~]# systemctl restart nfs-server
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=4
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
336 95 0:46 / /mnt_test rw,relatime shared:178 - nfs4 localhost:/export_test rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:46/read_ahead_kb
128                             <<<<<<<<<<<<<<
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=3
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
334 95 0:45 / /mnt_test rw,relatime shared:178 - nfs localhost:/export_test/ rw,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=::1,mountvers=3,mountport=20048,mountproto=udp6,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:45/read_ahead_kb
256
[root@ibm-z-509 ~]# umount /mnt_test/


[root@ibm-z-509 ~]# vi /etc/nfs.conf
[root@ibm-z-509 ~]# cat /etc/nfs.conf
[nfsrahead]
 default=256                    <<<<<<<<<<<<<< IMO both v3 and v4 mounts should use this default value
[root@ibm-z-509 ~]# systemctl restart nfs-server
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=3
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
334 95 0:45 / /mnt_test rw,relatime shared:178 - nfs localhost:/export_test/ rw,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=::1,mountvers=3,mountport=20048,mountproto=udp6,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:45/read_ahead_kb
256
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]# mount localhost:/export_test/ /mnt_test/ -o vers=4
[root@ibm-z-509 ~]# cat /proc/self/mountinfo | grep test
336 95 0:46 / /mnt_test rw,relatime shared:178 - nfs4 localhost:/export_test rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[root@ibm-z-509 ~]# cat /sys/class/bdi/0\:46/read_ahead_kb
128                             <<<<<<<<<<<<<<
[root@ibm-z-509 ~]# umount /mnt_test/
[root@ibm-z-509 ~]#
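
Putting the three experiments together: per nfsrahead(5), as I read it, the lookup order should be the fstype-specific key ("nfs" or "nfs4", taken from the mount's fstype in mountinfo), then "default", then a built-in 128. A config exercising all three, for reference:

 [nfsrahead]
  nfs=15000     # expected for vers=3 mounts (works in the logs above)
  nfs4=16000    # expected for vers=4 mounts (ignored on s390x above)
  default=128   # fallback when no fstype key matches

On s390x the v4 mount always lands on 128, regardless of what "nfs4" or "default" says.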

Comment 2 Yongcheng Yang 2022-10-20 02:19:14 UTC
It looks like there is more than one issue here:

1. Under a KVM hypervisor, configuring the nfsrahead value fails intermittently, e.g.
 - fail in KVM https://beaker.engineering.redhat.com/jobs/7137086
 - pass in non-KVM https://beaker.engineering.redhat.com/jobs/7137194


2. On s390x, the v4 config doesn't take effect; it seemingly always uses the default value "128":
 - https://beaker.engineering.redhat.com/recipes/12794496
 ...
[09:52:08 root@ ~~]# mount -t nfs localhost:/exportdir /mnt/nfsmp -o vers=4.2
[09:52:08 root@ ~~]# cat /proc/self/mountinfo | grep /mnt/nfsmp
334 95 0:47 / /mnt/nfsmp rw,relatime shared:176 - nfs4 localhost:/exportdir rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1
[09:52:08 root@ ~~]# /usr/libexec/nfsrahead -F -d 0:47
nfsrahead: setting /mnt/nfsmp readahead to 16000

16000
[09:52:08 root@ ~~]# cat /sys/class/bdi/0:47/read_ahead_kb
128
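
Note that the sysfs value staying at 128 after running the helper by hand may itself be expected: if I read the tool right (an assumption from the nfs-utils sources), nfsrahead only computes and prints the value on stdout (-F logs to stderr, -d enables debug output), and it is udev's ATTR{read_ahead_kb}="%c" assignment that actually writes sysfs. The suspicious part is that udev never applied 16000 here. Writing the value manually should work as a cross-check:

 echo 16000 > /sys/class/bdi/0:47/read_ahead_kb   # hand-emulating the udev assignment
 cat /sys/class/bdi/0:47/read_ahead_kb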


3. On ppc64le, the effective value is sometimes slightly lower than what we set:
 - https://beaker.engineering.redhat.com/recipes/12794501
 ...
[11:47:36 root@ ~~]# mount -t nfs localhost:/exportdir /mnt/nfsmp -o vers=3
[11:47:36 root@ ~~]# cat /proc/self/mountinfo | grep /mnt/nfsmp
413 96 0:46 / /mnt/nfsmp rw,relatime shared:217 - nfs localhost:/exportdir rw,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=::1,mountvers=3,mountport=20048,mountproto=udp6,local_lock=none,addr=::1
[11:47:36 root@ ~~]# /usr/libexec/nfsrahead -F -d 0:46
nfsrahead: setting /mnt/nfsmp readahead to 15000

15000
[11:47:37 root@ ~~]# cat /sys/class/bdi/0:46/read_ahead_kb
14976
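
This one may be page-size rounding rather than a real bug, I think: the kernel stores readahead as a whole number of pages, and RHEL ppc64le kernels use 64 KiB pages (an assumption; check getconf PAGESIZE), so 15000 KiB is truncated to a whole multiple of the page size:

 getconf PAGESIZE              # 65536 expected on this kernel
 echo $(( 15000 / 64 * 64 ))   # -> 14976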

Comment 4 RHEL Program Management 2023-09-23 11:15:26 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 5 RHEL Program Management 2023-09-23 11:15:54 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.