Bug 1077095 - Volume group not found on boot occasionally
Summary: Volume group not found on boot occasionally
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 20
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Ondrej Kozina
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-03-17 08:45 UTC by Kamil Páral
Modified: 2015-05-29 13:54 UTC (History)
11 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-05-29 13:54:01 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
failed boot - screenshot (305.19 KB, image/jpeg)
2014-03-17 08:46 UTC, Kamil Páral
failed boot - rdsosreport (90.60 KB, text/plain)
2014-03-17 08:47 UTC, Kamil Páral
failed boot - journal (80.01 KB, text/plain)
2014-03-17 08:47 UTC, Kamil Páral
proper boot - journal (203.74 KB, text/plain)
2014-03-17 08:50 UTC, Kamil Páral

Description Kamil Páral 2014-03-17 08:45:49 UTC
Description of problem:
Sometimes (roughly once every one or two weeks) my system does not boot, because my volume group is not found during boot. The boot hangs at the message:

> Started Cryptography Setup for luks-73c97d09-6d02-460c-9c66-94cbd86476d6.

After several minutes, it times out and I see 

> Warning: Could not boot.

and a dracut rescue shell. If I simply reboot, the next boot works perfectly fine. The disk layout is intact, but there is some problem in or around LVM.

I have attached the dracut rdsosreport, the journal, and a screenshot of the failed boot. This is probably the most important part:

[   11.794438] medusa systemd-cryptsetup[289]: Set cipher aes, mode xts-plain64, key size 512 bits for device /dev/disk/by-uuid/73c97d09-6d02-460c-9c66-94cbd86476d6.
[   13.983354] medusa kernel: bio: create slab <bio-1> at 1
[   14.142971] medusa kernel: bio: create slab <bio-1> at 1
[   14.231950] medusa systemd[1]: Found device /dev/mapper/luks-73c97d09-6d02-460c-9c66-94cbd86476d6.
[   14.232113] medusa systemd[1]: Started Cryptography Setup for luks-73c97d09-6d02-460c-9c66-94cbd86476d6.
[   14.785466] medusa dracut-initqueue[233]: Scanning devices dm-0  for LVM logical volumes medusa_ssd/lv_root medusa_ssd/lv_root medusa/lv_swap
[   14.860170] medusa dracut-initqueue[233]: inactive '/dev/medusa/lv_swap' [2.00 GiB] inherit
[   14.860408] medusa dracut-initqueue[233]: inactive '/dev/medusa/virt_rhel6' [10.00 GiB] inherit
[   14.860600] medusa dracut-initqueue[233]: inactive '/dev/medusa/lv_data' [200.00 GiB] inherit
[   14.882319] medusa dracut-initqueue[233]: Volume group "medusa_ssd" not found
[   14.882762] medusa dracut-initqueue[233]: Skipping volume group medusa_ssd
[   14.995633] medusa systemd[1]: Started Cryptography Setup for luks-73c97d09-6d02-460c-9c66-94cbd86476d6.
[   33.748927] medusa systemd[1]: Received SIGRTMIN+20 from PID 238 (plymouthd).
[  138.465816] medusa dracut-initqueue[233]: Scanning devices dm-0  for LVM logical volumes medusa_ssd/lv_root medusa_ssd/lv_root medusa/lv_swap
[  138.485963] medusa dracut-initqueue[233]: ACTIVE '/dev/medusa/lv_swap' [2.00 GiB] inherit
[  138.486348] medusa dracut-initqueue[233]: inactive '/dev/medusa/virt_rhel6' [10.00 GiB] inherit
[  138.486721] medusa dracut-initqueue[233]: inactive '/dev/medusa/lv_data' [200.00 GiB] inherit
[  138.489470] medusa dracut-initqueue[233]: PARTIAL MODE. Incomplete logical volumes will be processed.
[  138.508339] medusa dracut-initqueue[233]: Volume group "medusa_ssd" not found
[  138.508665] medusa dracut-initqueue[233]: Skipping volume group medusa_ssd
[  199.743379] medusa dracut-initqueue[233]: Warning: Could not boot.
[  199.746377] medusa systemd[1]: Received SIGRTMIN+20 from PID 238 (plymouthd).
[  199.749379] medusa dracut-initqueue[233]: Warning: /dev/mapper/medusa_ssd-lv_root does not exist
[  199.750175] medusa dracut-initqueue[233]: Warning: /dev/mapper/medusa_ssd-lv_root does not exist
[  199.750780] medusa dracut-initqueue[233]: Warning: /dev/mapper/medusa_ssd-lv_root does not exist
[  199.751256] medusa dracut-initqueue[233]: Warning: /dev/medusa_ssd/lv_root does not exist
[  199.751958] medusa dracut-initqueue[233]: Warning: /dev/medusa_ssd/lv_root does not exist
[  199.760086] medusa systemd[1]: Starting Dracut Emergency Shell...
[  199.783264] medusa systemd[1]: Received SIGRTMIN+21 from PID 238 (plymouthd).
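(Not part of the original report, just for context: the failure signature can be pulled out of a saved boot journal by filtering for the dracut-initqueue LVM scan lines. A minimal sketch; the `journal.txt` filename is illustrative, and it is seeded here with three representative lines from the log above rather than a real dump.)

```shell
# Sketch (assumption, not from the report): filter a saved boot journal for
# the dracut-initqueue lines that show the "Volume group not found" race.
# "journal.txt" is a hypothetical dump, e.g. produced by:
#   journalctl -b -1 > journal.txt
# Seed it with three representative lines from the log above:
cat > journal.txt <<'EOF'
[   14.882319] medusa dracut-initqueue[233]: Volume group "medusa_ssd" not found
[   33.748927] medusa systemd[1]: Received SIGRTMIN+20 from PID 238 (plymouthd).
[  138.508339] medusa dracut-initqueue[233]: Volume group "medusa_ssd" not found
EOF
# Keep only the dracut-initqueue lines about volume group scanning/activation;
# repeated "not found" lines over a long time span point at the race:
grep -E 'dracut-initqueue.*(Volume group|Skipping volume group|active)' journal.txt
```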


This is how the disk layout _should look_ (taken from a properly booted system):

> $ lsblk
> NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                                             8:0    0 298.1G  0 disk  
> └─sda3                                          8:3    0   293G  0 part  
>   └─luks-73c97d09-6d02-460c-9c66-94cbd86476d6 253:1    0   293G  0 crypt 
>     ├─medusa-lv_swap                          253:3    0     2G  0 lvm   
>     ├─medusa-virt_rhel6                       253:4    0    10G  0 lvm   
>     └─medusa-lv_data                          253:5    0   200G  0 lvm   /mnt/data
> sdb                                             8:16   0 111.8G  0 disk  
> ├─sdb1                                          8:17   0   200M  0 part  /boot/efi
> ├─sdb2                                          8:18   0   750M  0 part  /boot
> └─sdb3                                          8:19   0 110.9G  0 part  
>   └─luks-a3249b9e-3b9e-45b4-82ee-1cac21523253 253:0    0 110.9G  0 crypt 
>     └─medusa_ssd-lv_root                      253:2    0 110.9G  0 lvm   /


> $ sudo blkid
> /dev/sda3: UUID="73c97d09-6d02-460c-9c66-94cbd86476d6" TYPE="crypto_LUKS" PARTUUID="41aa5342-561a-4dbf-a6a1-79d28ef0f53f" 
> /dev/sdb1: SEC_TYPE="msdos" UUID="7B93-71A9" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="aa77ea48-936a-4428-9a20-8505940b2587" 
> /dev/sdb2: UUID="cd1d6514-e7c5-402f-966f-27e978f4bb92" TYPE="ext4" PARTUUID="3330de9a-def1-4518-b890-7faaa1dd6e38" 
> /dev/sdb3: UUID="a3249b9e-3b9e-45b4-82ee-1cac21523253" TYPE="crypto_LUKS" PARTUUID="6465ce7b-3494-4405-bf4d-561eab1fd97d" 
> /dev/mapper/luks-a3249b9e-3b9e-45b4-82ee-1cac21523253: UUID="2IEcX8-92ih-0PuA-Rrs9-CwAe-1Z4V-B2ltkp" TYPE="LVM2_member" 
> /dev/mapper/luks-73c97d09-6d02-460c-9c66-94cbd86476d6: UUID="t5izXa-ZF5Z-ey8X-tM59-s6js-vPdq-2zVXOp" TYPE="LVM2_member" 
> /dev/mapper/medusa_ssd-lv_root: UUID="64c1467a-e0d9-4d2d-a9a0-b664dda9fa8f" TYPE="ext4" 
> /dev/mapper/medusa-lv_swap: UUID="fa344f25-bcd9-4c81-84eb-5ddd18d61245" TYPE="swap" 
> /dev/mapper/medusa-lv_data: UUID="c242ad79-fe25-40b9-b727-6808ee034ea7" TYPE="ext4" 
> /dev/mapper/medusa-virt_rhel6: PTUUID="000addea" PTTYPE="dos" 

> $ sudo pvs
>   PV                                                    VG         Fmt  Attr PSize   PFree 
>   /dev/mapper/luks-73c97d09-6d02-460c-9c66-94cbd86476d6 medusa     lvm2 a--  292.94g 80.94g
>   /dev/mapper/luks-a3249b9e-3b9e-45b4-82ee-1cac21523253 medusa_ssd lvm2 a--  110.86g     0 

> $ sudo vgs
>   VG         #PV #LV #SN Attr   VSize   VFree 
>   medusa       1   3   0 wz--n- 292.94g 80.94g
>   medusa_ssd   1   1   0 wz--n- 110.86g     0 

> $ sudo lvs
>   LV         VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
>   lv_data    medusa     -wi-ao---- 200.00g                                             
>   lv_swap    medusa     -wi-a-----   2.00g                                             
>   virt_rhel6 medusa     -wi-a-----  10.00g                                             
>   lv_root    medusa_ssd -wi-ao---- 110.86g                                             

> $ sudo dmsetup ls --tree
> medusa_ssd-lv_root (253:2)
>  └─luks-a3249b9e-3b9e-45b4-82ee-1cac21523253 (253:0)
>     └─ (8:19)
> medusa-virt_rhel6 (253:4)
>  └─luks-73c97d09-6d02-460c-9c66-94cbd86476d6 (253:1)
>     └─ (8:3)
> medusa-lv_swap (253:3)
>  └─luks-73c97d09-6d02-460c-9c66-94cbd86476d6 (253:1)
>     └─ (8:3)
> medusa-lv_data (253:5)
>  └─luks-73c97d09-6d02-460c-9c66-94cbd86476d6 (253:1)
>     └─ (8:3)


Version-Release number of selected component (if applicable):
kernel-3.13.6-200.fc20.x86_64
llvm-libs-3.3-4.fc20.x86_64
lvm2-2.02.103-5.fc20.x86_64
lvm2-libs-2.02.103-5.fc20.x86_64
cryptsetup-1.6.4-1.fc20.x86_64
cryptsetup-libs-1.6.4-1.fc20.x86_64
dracut-034-64.git20131205.fc20.1.x86_64
dracut-config-rescue-034-64.git20131205.fc20.1.x86_64
dracut-network-034-64.git20131205.fc20.1.x86_64

How reproducible:
rarely (about once every one or two weeks)

Steps to Reproduce:
1. Boot normally and enter the passphrase for the encrypted disks. The system then waits and eventually fails to boot.

Comment 1 Kamil Páral 2014-03-17 08:46:44 UTC
Created attachment 875377 [details]
failed boot - screenshot

Comment 2 Kamil Páral 2014-03-17 08:47:07 UTC
Created attachment 875378 [details]
failed boot - rdsosreport

Comment 3 Kamil Páral 2014-03-17 08:47:26 UTC
Created attachment 875379 [details]
failed boot - journal

Comment 4 Kamil Páral 2014-03-17 08:50:45 UTC
Created attachment 875380 [details]
proper boot - journal

Comment 5 Kamil Páral 2014-06-12 15:16:39 UTC
Zdenek, I haven't seen this problem in quite some time (two months?). I guess some update must have fixed it, or maybe I stopped having bad luck :)

Comment 6 Fedora End Of Life 2015-05-29 11:16:52 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately four weeks from now, Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 20 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 7 Kamil Páral 2015-05-29 13:54:01 UTC
I no longer have this problem with F21.

