Description Norbert Jurkeit 2018-05-10 13:31:14 UTC
Created attachment 1434359
logs from systemctl and journalctl commands
My /home file system lives in a logical volume whose physical volume sits on top of an MD RAID level 1 (mirror) device. /home is pulled in during boot by an entry in /etc/fstab:
/dev/mapper/vg00-home /home ext4 defaults 1 2
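For clarity, the storage stack on my system looks roughly like this (device names as above; the VG/LV names follow from the mapper path):

/dev/sda7 + /dev/sdb7 -> /dev/md0 (MD RAID1) -> LVM PV -> VG vg00 -> LV home -> ext4 mounted on /home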
Sometimes during boot this message is displayed:
Failed to start LVM2 PV scan on device 9:0
See 'systemctl status lvm2-pvscan@9:0.service' for details
The attempt to mount /home then times out, and a rescue shell can be entered.
The outputs of the mentioned systemctl command and of the more complete journalctl command are included in the attached file. It seems that pvscan gets confused because the PV is found not only on the RAID device md0 but also on its components sda7 and sdb7 (even though the parameter "md_component_detection" is set to 1 in /etc/lvm/lvm.conf).
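As a quick cross-check (a sketch, not taken from the attached logs), listing all devices that carry an LVM PV signature should show all three block devices:

blkid -t TYPE="LVM2_member"
# expected to list /dev/md0 as well as its components /dev/sda7 and /dev/sdb7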
When I repeat the failed pvscan command in the rescue shell it succeeds without complaint and the boot process can be finished.
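For reference, the command I repeat in the rescue shell is essentially what the lvm2-pvscan@.service template runs (assuming the stock unit file; 9:0 is the major:minor of md0):

/usr/sbin/lvm pvscan --cache --activate ay 9:0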
In the case of a successful boot the journal excerpt looks quite similar, except that the duplicate PV occurrence is not treated as fatal.
This has also happened with earlier Fedora versions, but very seldom. With Fedora 28, about every second boot fails on my oldest desktop PC, and on my newer desktop PC, which runs x86_64 software but has an otherwise similar configuration, the issue has already shown up once since I installed Fedora 28 a few days ago.
In an attempt to fix the issue I set the "filter" and "global_filter" parameters in lvm.conf so that only /dev/md0 is accepted, but without success. Apparently those filters only prevent devices from being used, not from being scanned (what is the rationale behind this?).
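For reference, this is roughly what I put into the devices section of /etc/lvm/lvm.conf (the exact regexes are reconstructed from memory, not verbatim):

devices {
    # accept only the RAID device, reject everything else (reconstructed, not verbatim)
    filter = [ "a|^/dev/md0$|", "r|.*|" ]
    global_filter = [ "a|^/dev/md0$|", "r|.*|" ]
}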
Finally I changed the "scan" parameter in lvm.conf from "/dev" to "/dev/md", and that fixes the issue for me.
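That is, again in the devices section (a sketch; the default is a single "/dev" entry):

devices {
    # scan only the directory holding the MD device nodes
    scan = [ "/dev/md" ]
}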
I can live with this workaround, but you might want to have a look at this strange behavior, as other installations are probably affected, too.
Since the upgrade to lvm2-2.02.177-5.fc28.x86_64.rpm the problem has not occurred any more, even after removal of my workaround. Apparently this is the same issue as described in bug 1589444.