Running this udev rule (/usr/lib/udev/rules.d/99-nfs.rules):

SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/usr/libexec/nfsrahead %k", ATTR{read_ahead_kb}="%c"

does not look like an ideal plan. For dm/lvm2, devices in the 'ADD' event are tableless and basically non-existent - they only become real with the first resume, which is accompanied by a 'CHANGE' event. Doing anything with such a device on the ADD event is therefore pointless. Other subsystems, like md/mdadm, have similar issues, and likely a number of others do as well. For such devices the rule only adds unnecessary slowdown to udev rule processing, and it basically always ends with some sort of error like:

253:2: Process '/usr/libexec/nfsrahead 253:2' failed with exit code 2.

There is also the good question of how the idea of making 'readahead' tunable inside /etc/nfs.conf is meant to play with the logic of 'tuned'. It is also worth mentioning that lvm2 is capable of calculating and estimating good defaults for each device, since it knows the device characteristics and alignments.

Such a rule should probably be enabled at runtime only when the user is actually running 'nfsd' with some disk setup. Otherwise, on systems with a large number of devices being actively added/removed (every LV activation), most of which are quite likely completely unrelated to nfsd, we are just adding more 'fork noise' to the chain.

So some better solution is needed here. (For a start, removing the rule and simply setting the read-ahead when nfsd is started would most likely be the better way.)
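Just to illustrate the scope of the problem (this is only a sketch, not a tested fix, and it assumes NFS bdi entries are always named with the anonymous block major 0, e.g. 0:52, while dm devices show up as 253:N), the rule could at least be narrowed so it never fires for dm/md bdi devices:

SUBSYSTEM=="bdi", ACTION=="add", KERNEL=="0:*", PROGRAM="/usr/libexec/nfsrahead %k", ATTR{read_ahead_kb}="%c"

That would at least keep LV activations from forking the helper, though it does not answer the /etc/nfs.conf vs. 'tuned' question.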
This bug appears to have been reported against 'rawhide' during the Fedora Linux 38 development cycle. Changing version to 38.
nfsrahead doesn't set the readahead for nfsd, but for NFS clients; nfsd keeps using the disk readahead. When a new mount is added, the udev event triggers nfsrahead to set the readahead accordingly. I understand that it is messy, and other options were attempted before settling on this one. [1] is the first attempt, using a mount parameter that stays in userspace. A second attempt, using a kernel mount option, was made in [2]. Both were rejected, and the last option was to use udev events, which is the currently implemented version.

> Otherwise, on systems with a large number of devices being actively added/removed (every LV activation), most of which are quite likely completely unrelated to nfsd, we are just adding more 'fork noise' to the chain.

This may be an issue. How frequent is a setup that is penalized by the current solution? How large is the penalty? How can we quantify this issue? We'll need to bring this discussion upstream, so we need to make a case for the current solution not being optimal.

[1] https://patchwork.kernel.org/project/linux-nfs/patch/20210803130717.2890565-1-trbecker@gmail.com/
[2] https://marc.info/?l=linux-nfs&m=162870205319008
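To make the mechanism concrete (the mount point and device number below are just examples, not taken from this report): when an NFS filesystem is mounted it gets an anonymous device number, the corresponding bdi device shows up under /sys/class/bdi/, and the value nfsrahead prints is applied to its read_ahead_kb attribute by the rule quoted above. The result can be checked with something like:

findmnt -no MAJ:MIN /mnt/export          # e.g. prints 0:52 for an NFS mount
cat /sys/class/bdi/0:52/read_ahead_kb    # the readahead the udev rule applied via nfsrahead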