This bug has been copied from bug #1676612 and proposed for backport to the 7.6 z-stream (EUS).
It looks like this will get SanityOnly testing from our QE, so fix verification is up to OpenShift. Niels, could you please try the 7.6.z scratch build before handing this over to QE?

Repo: http://brew-task-repos.usersys.redhat.com/repos/scratch/mcsontos/lvm2/2.02.180/10.el7_6.3.bz1688316/lvm2-2.02.180-10.el7_6.3.bz1688316-scratch.repo

Scratch build for reference: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=20576533
Hi Marian, I will not be available tomorrow or next week.

Saravana, could you try this out (and include the change for bug 1676466) in a scratch build? Stracing pvscan in the rhgs-server container should not show any open() or read() calls on /run/udev....something.
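One way to run that check (a sketch, not a verified procedure; the container name "rhgs-server" and the exact strace filter are assumptions):

    # trace file access of pvscan inside the running container and look for udev paths
    docker exec rhgs-server strace -f -e trace=open,openat,read pvscan 2>&1 | grep '/run/udev'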
Created attachment 1545147 [details]
udev disabled in lvm.conf and lvm rpms installed

(In reply to Niels de Vos from comment #4)
> Hi Marian, I will not be available tomorrow or next week.
>
> Saravana, could you try this out (and include the change for bug 1676466) in
> a scratch build? Stracing pvscan in the rhgs-server container should not
> show any open() or read() calls on /run/udev....something.

I have made a scratch build with the mentioned rpms and disabled obtain_device_list_from_udev. Refer to without_udev.txt for:
1. rpms installed
2. udev disabled in /etc/lvm/lvm.conf
3. strace output

I can still see access to the /run/udev/control file; please check the attached file:

munmap(0x7f9cd8033000, 4096) = 0
access("/run/udev/control", F_OK) = -1 ENOENT (No such file or directory)
stat("/dev", {st_mode=S_IFDIR|0755, st_size=3460, ...}) = 0

PS:
1. There is NO corresponding rpm for device-mapper-persistent-data here, so I am using the existing one (refer to the rpm output): http://brew-task-repos.usersys.redhat.com/repos/scratch/mcsontos/lvm2/2.02.180/10.el7_6.3.bz1688316/x86_64/
2. FYI, I run the docker container as below:

docker run --privileged=true --net=host -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /dev:/dev -e TCMU_LOGDIR=/var/log/glusterfs/gluster-block -e GB_GLFS_LRU_COUNT=15 --name udev-disabled-gluster <image name>
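For reference, this is the kind of lvm.conf change used to disable the udev device list (a minimal sketch; the real /etc/lvm/lvm.conf contains many more settings in the devices section):

    # /etc/lvm/lvm.conf
    devices {
        # scan /dev directly instead of asking udev for the device list
        obtain_device_list_from_udev = 0
    }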
Created attachment 1545148 [details]
udev enabled and latest lvm rpms installed

For the sake of comparison, I have left obtain_device_list_from_udev as is (i.e., enabled), installed the latest lvm rpms, and attached the strace output of pvscan.
The original bug reports a slowdown when running lvm operations, due to waiting for the udev database:

> Description of problem:
> When running lvm commands (pvscan and the like) in a container, progress is really slow
> and can take many hours (depending on the number of devices and possibly other factors).
>
> While running 'pvs' messages like these are printed:
>
> WARNING: Device /dev/xvda2 not initialized in udev database even after waiting 10000000 microseconds.
> WARNING: Device /dev/xvdb1 not initialized in udev database even after waiting 10000000 microseconds.

I am not at all sure whether the mentioned udev/control access is safe, or something that needs to be investigated further. Peter, you might know more about this...

Are the activation/udev_sync and activation/udev_rules configuration options set?
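One way to check the effective values (a sketch; lvmconfig should be available wherever the lvm2 tools are installed):

    # print the current values of the udev-related settings as lvm sees them
    lvmconfig activation/udev_sync activation/udev_rules devices/obtain_device_list_from_udev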
(In reply to Marian Csontos from comment #7)
> I am not at all sure whether the mentioned udev/control access is safe, or
> something that needs to be investigated further. Peter, you might know more
> about this...

That's OK and expected - that part comes from libudev initialization. We use libudev at libdm initialization (which in turn is used by the LVM tools) to collect information about the system's environment and to see whether udev is available, and we check this once per LVM tool execution.

The activation/udev_sync, activation/udev_rules and devices/obtain_device_list_from_udev settings have no effect on whether we use libudev or not. We still use libudev in all cases to check the system's environment and udev's state. This is because you can override lvm.conf settings via --config, and/or you can run lvm commands in the LVM shell where each command can define its own --config with different udev behaviour. So we check the global udev state once so that we do not repeat it needlessly.

Anyway, this part is not a problem (it doesn't cause any hangs or the like).
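To illustrate why the check cannot depend on configuration alone, the same setting can be overridden on any single command invocation (a sketch):

    # per-command override of an lvm.conf setting; the global udev state check
    # still happens once at tool startup regardless of this value
    pvscan --config 'devices { obtain_device_list_from_udev = 0 }'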
However, I do not see any access to the udev database in either of the log files.

I checked the code: not only do the settings have no effect on whether we initialize udev, but also, vice versa, when udev is not available the above settings have no effect and udev shall not be used.

Previously the two udev calls were unconditional. Now the problematic udev DB access is properly guarded by `obtain_device_list_from_udev`, which is initialised to 0 when `udev_is_running()` fails (and in other cases).

I am preparing a 7.6.z errata. Thanks
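A quick way to verify this behaviour in a container (a sketch; it assumes strace is installed in the image): the harmless libudev initialization only touches /run/udev/control, while the problematic accesses were reads of the udev database under /run/udev/data.

    # with the fixed build and udev not running, this should print nothing
    strace -f -e trace=open,openat pvscan 2>&1 | grep '/run/udev/data'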
*** Bug 1695732 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0814