Red Hat Bugzilla – Bug 837867
fedora-storage-init usually fails with a complaint about being unable to get a lock
Last modified: 2013-02-13 21:16:07 EST
Description of problem: On 5 of my 7 Fedora 16 systems, systemctl reports
that the fedora-storage-init.service is in a failed state.
Version-Release number of selected component (if applicable):
systemd-37-25.fc16.x86_64 and initscripts-9.34.2-1.fc16.x86_64
How reproducible: Boot a Fedora 16 system that uses LVM (MD RAID may also be a factor).
Steps to Reproduce:
1. Boot the system.
2. Run "systemctl --failed".

Actual results:
The system seems to work properly otherwise, but "systemctl --failed" reports:
fedora-storage-init.service loaded failed failed Initialize storage subsystems (RAID, LVM, etc.)

Expected results:
The service should succeed.
Here are the typical error messages from dmesg:
bash-4.2$ dmesg | grep fedora-storage-init
[ 10.891116] fedora-storage-init: Setting up Logical Volume Management: /var/lock/lvm/V_vg_sys:aux: open failed: No such file or directory
[ 10.907350] fedora-storage-init: Can't get lock for vg_sys
[ 10.915434] fedora-storage-init: [FAILED]
[ 10.933791] systemd: fedora-storage-init.service: main process exited, code=exited, status=5
[ 10.957829] systemd: Unit fedora-storage-init.service entered failed state.
[ 11.614714] fedora-storage-init: Setting up Logical Volume Management: 4 logical volume(s) in volume group "vg_sys" now active
[ 11.628506] fedora-storage-init: [ OK ]
bash-4.2$ systemctl status fedora-storage-init.service
fedora-storage-init.service - Initialize storage subsystems (RAID, LVM, etc.)
Loaded: loaded (/lib/systemd/system/fedora-storage-init.service; static)
Active: failed since Tue, 03 Jul 2012 22:04:16 -0400; 1 day and 14h ago
Main PID: 953 (code=exited, status=5)
--sysinit is supposed to cover whatever locking needs we have.
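For context, the LVM activation step in the storage init script is believed to look roughly like the sketch below; the exact command line is an assumption, not quoted from the initscripts source. The relevant point is that --sysinit implies --ignorelockingfailure, which is why a missing lock directory should be tolerated rather than fatal:

```shell
# Hedged sketch of fedora-storage-init's LVM step; the exact flags
# are an assumption, not the verbatim initscripts source. The
# function is only defined here, not run.
activate_lvm() {
    # --sysinit marks this as an early-boot invocation; among other
    # things it implies --ignorelockingfailure, so failure to open
    # the /var/lock/lvm/* lock files should be non-fatal.
    /sbin/lvm vgchange -a y --sysinit
}
```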
Do you have a /var/lock/lvm? What is your filesystem layout?
Yes, but that directory is empty:
[root@ti86 ~]# ls -al /var/lock/lvm
drwx------ 2 root root 40 Jul 5 08:44 .
drwxr-xr-x 8 root root 160 Jul 5 03:14 ..
[root@ti86 ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
rootfs 1.2G 952M 140M 88% /
devtmpfs 1.5G 8.0K 1.5G 1% /dev
tmpfs 1.5G 96K 1.5G 1% /dev/shm
/dev/mapper/vg_os-f16_root 1.2G 952M 140M 88% /
tmpfs 1.5G 49M 1.5G 4% /run
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
tmpfs 1.5G 0 1.5G 0% /media
/tmp 1.5G 5.4M 1.5G 1% /tmp
/dev/mapper/vg_os-extra_disk 23G 749M 23G 4% /extra_disk
/dev/mapper/vg_os-f16_var 1.7G 665M 913M 43% /var
tmpfs 1.5G 49M 1.5G 4% /var/run
/dev/md11 463M 339M 101M 78% /boot
/dev/mapper/vg_os-f16_usr 20G 8.7G 11G 46% /usr
[root@ti86 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
extra_disk vg_os -wi-ao 23.00g
f16_root vg_os -wi-ao 1.12g
f16_usr vg_os -wi-ao 20.00g
f16_var vg_os -wi-ao 1.62g
[root@ti86 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_os 1 4 0 wz--n- 68.00g 22.25g
[root@ti86 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md13 vg_os lvm2 a-- 68.00g 22.25g
[root@ti86 ~]# cat /proc/mdstat
Personalities : [raid1]
md13 : active raid1 sdb3 sda3
71369216 blocks [2/2] [UU]
md11 : active raid1 sdb1 sda1
488384 blocks [2/2] [UU]
md12 : active raid1 sda2 sdb2
6291392 blocks [2/2] [UU]
unused devices: <none>
As I mentioned, this is inconsistent. Sometimes it succeeds, but it seems to fail most of the time. I guess there's a race condition somewhere.
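The race theory can be illustrated with a stand-in lock path (the /tmp paths here are illustrative, not the real /var/lock/lvm): opening a per-VG lock file fails with the same "open failed: No such file or directory" whenever its parent directory has not been created yet, and succeeds once it has.

```shell
# Illustrative sketch of the suspected race, using a /tmp stand-in
# for /var/lock/lvm: the per-VG lock file cannot be opened until
# something has created the lock directory.
LOCKDIR=/tmp/demo-lvm-lock
rm -rf "$LOCKDIR"

# Too early: the directory is missing, so the open fails just like
# "V_vg_sys:aux: open failed" in the dmesg output above:
( : >"$LOCKDIR/V_vg_sys" ) 2>&1 | grep -o 'No such file or directory'

# Late enough: once the directory exists, the same open succeeds,
# matching the later successful "now active" retry.
mkdir -p "$LOCKDIR"
( : >"$LOCKDIR/V_vg_sys" ) && echo "lock file opened"
```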
Please try setting log/file="/var/run/lvm2.log" and log/level=7 in /etc/lvm/lvm.conf, then reboot. We should then have a log that includes the LVM commands called at boot. If you manage to capture the log while the problem occurs, please attach it here for further analysis. Thanks.
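Concretely, both requested settings go in the existing "log" section of /etc/lvm/lvm.conf; a minimal sketch of the additions (the rest of the stock file is omitted):

```
# Additions to /etc/lvm/lvm.conf requested above; only the two
# relevant keys in the existing "log" section are shown.
log {
    # Verbose command log, written somewhere available early in boot:
    file = "/var/run/lvm2.log"
    # Debug level 7 is the most verbose:
    level = 7
}
```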
Created attachment 598677 [details]
LVM2 log file showing failure of fedora-storage-init.service
As requested, here is the log file. I hope this helps.
I see the same problem:
[ 11.446764] fedora-storage-init: Setting up Logical Volume Management: /var/lock/lvm/V_VolGroup02:aux: open failed: No such file or directory
[ 11.446893] fedora-storage-init: Can't get lock for VolGroup02
[ 11.446984] fedora-storage-init: /var/lock/lvm/V_VolGroup01:aux: open failed: No such file or directory
[ 11.447103] fedora-storage-init: Can't get lock for VolGroup01
[ 11.447188] fedora-storage-init: /var/lock/lvm/V_VolGroup00:aux: open failed: No such file or directory
[ 11.447307] fedora-storage-init: Can't get lock for VolGroup00
[ 11.494643] fedora-storage-init: [FAILED]
[ 11.494940] systemd: fedora-storage-init-late.service: main process exited, code=exited, status=5
[ 11.507823] systemd: Unit fedora-storage-init-late.service entered failed state.
[ 11.740382] lvm: 1 logical volume(s) in volume group "VolGroup02" monitored
[ 11.742503] lvm: 10 logical volume(s) in volume group "VolGroup01" monitored
[ 11.744261] lvm: 9 logical volume(s) in volume group "VolGroup00" monitored
But everything seems to work OK otherwise.
This message is a reminder that Fedora 16 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 16. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora
'version' of '16'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version prior to Fedora 16's end of life.
Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 16 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to click on
"Clone This Bug" and open it against that version of Fedora.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
The process we are following is described here:
Fedora 16 changed to end-of-life (EOL) status on 2013-02-12. Fedora 16 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version.
Thank you for reporting this bug and we are sorry it could not be fixed.