Bug 989607

Summary:      lvm group is not activated at startup
Product:      Fedora
Component:    lvm2
Version:      20
Hardware:     x86_64
OS:           Unspecified
Status:       CLOSED DUPLICATE
Type:         Bug
Severity:     unspecified
Priority:     unspecified
Reporter:     Pietpiet <pietpiet>
Assignee:     Peter Rajnoha <prajnoha>
QA Contact:   Fedora Extras Quality Assurance <extras-qa>
CC:           agk, arseniev, bmarzins, bmr, dwysocha, heinzm, jonathan, lvm-team, msnitzer, pallas, pietpiet, prajnoha, prockai, rtresidd, sergio.pasra, zkabelac
Doc Type:     Bug Fix
Last Closed:  2013-10-29 08:20:25 UTC

Attachments:  logfile (no flags)

Description Pietpiet 2013-07-29 15:58:44 UTC
Description of problem:
I have an LVM volume mounted in my home directory. After upgrading from Fedora 18 to 19 using fedup, this volume is no longer mounted at start-up. The problem seems to be that the volume group is not activated: running "vgchange -a y" manually as root makes the volume appear in /dev/mapper and lets me mount it.

Not sure which logs I should add.
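
For reference, the manual workaround described above boils down to the following (a sketch; the mount-unit name comes from the unit file shown in comment 2 below):

  # as root: activate all volume groups, then start the mount unit
  vgchange -ay
  systemctl start home-iedereen-publiek.mount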

Version-Release number of selected component (if applicable):

lvm2 2.02.98-10


Comment 1 Peter Rajnoha 2013-07-30 08:16:55 UTC
Is there any error message issued at boot? (I suppose you see a timeout from systemd for that LV, right?)

Is lvmetad enabled - what's the global/use_lvmetad set to in /etc/lvm/lvm.conf? (you can also check that using "lvm dumpconfig global/use_lvmetad").
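
For reference, the setting lives in the global section of /etc/lvm/lvm.conf - a sketch of the stock default that enables lvmetad:

  global {
      # Query the lvmetad daemon instead of rescanning devices on every
      # command; needed for udev-triggered LV autoactivation.
      use_lvmetad = 1
  }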

Please, attach the output of:

  systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation-early.service lvm2-activation.service

Also the output of the lsblk command and the content of the /etc/fstab file. Thanks.

Comment 2 Pietpiet 2013-07-30 08:39:19 UTC
The information you asked for:

Yes, there is a timeout error at boot:

jul 30 10:29:18 bakkie systemd[1]: Job dev-disk-by\x2dlabel-publiek.device/start timed out.
jul 30 10:29:18 bakkie systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-publiek.device.
-- Subject: Unit dev-disk-by\x2dlabel-publiek.device has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
-- 
-- Unit dev-disk-by\x2dlabel-publiek.device has failed.
-- 
-- The result is timeout.
jul 30 10:29:18 bakkie systemd[1]: Dependency failed for /home/iedereen/publiek.
-- Subject: Unit home-iedereen-publiek.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
-- 
-- Unit home-iedereen-publiek.mount has failed.
-- 
-- The result is dependency.

$ lvm dumpconfig global/use_lvmetad
use_lvmetad=1

$ systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation.service lvm2-activation-early.service
lvm2-lvmetad.socket - LVM2 metadata daemon socket
       Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled)
       Active: active (running) since ma 2013-07-29 17:37:26 CEST; 16h ago
         Docs: man:lvmetad(8)
               man:lvmetad(8)
       Listen: /run/lvm/lvmetad.socket (Stream)


lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: active (running) since ma 2013-07-29 17:37:27 CEST; 16h ago
     Docs: man:lvmetad(8)
  Process: 191 ExecStart=/usr/sbin/lvmetad (code=exited, status=0/SUCCESS)
 Main PID: 192 (lvmetad)
   CGroup: name=systemd:/system/lvm2-lvmetad.service
           └─192 /usr/sbin/lvmetad


lvm2-activation.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)


lvm2-activation-early.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)

Before activation:
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 74,5G  0 disk 
├─sda1   8:1    0  500M  0 part /boot
├─sda2   8:2    0   23G  0 part /home
├─sda3   8:3    0  3,9G  0 part [SWAP]
├─sda4   8:4    0    1K  0 part 
└─sda5   8:5    0 47,1G  0 part /
sdb      8:16   0  1,4T  0 disk 
sdc      8:32   0  1,4T  0 disk 

After activation:
$ lsblk 
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda              8:0    0 74,5G  0 disk 
├─sda1           8:1    0  500M  0 part /boot
├─sda2           8:2    0   23G  0 part /home
├─sda3           8:3    0  3,9G  0 part [SWAP]
├─sda4           8:4    0    1K  0 part 
└─sda5           8:5    0 47,1G  0 part /
sdb              8:16   0  1,4T  0 disk 
├─data-prive   253:0    0  500G  0 lvm  
└─data-publiek 253:1    0  2,2T  0 lvm  
sdc              8:32   0  1,4T  0 disk 
└─data-publiek 253:1    0  2,2T  0 lvm  

The mount is not configured in the /etc/fstab file, but in a systemd unit, home-iedereen-publiek.mount, shown below:

[Unit]
Description=/home/iedereen/publiek

[Mount]
What=/dev/disk/by-label/publiek
Where=/home/iedereen/publiek
DirectoryMode=666
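
An equivalent /etc/fstab entry would look roughly like this (a sketch, not taken from the reporter's system; x-systemd.device-timeout is optional and merely lengthens the wait for the device):

  # hypothetical fstab equivalent of the unit above
  LABEL=publiek  /home/iedereen/publiek  auto  defaults,x-systemd.device-timeout=3min  0 0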

Comment 3 Peter Rajnoha 2013-07-30 12:14:27 UTC
OK, just to make sure - if it fails and you need to manually run vgchange -ay, then running "systemctl start home-iedereen-publiek.mount" mounts the volume correctly, right?

Could you please try running systemd with debug logging enabled? Just add this to your kernel command line (assuming grub is used):

  linux /vmlinuz .... systemd.log_level=debug systemd.log_target=journal-or-kmsg

And then try to get the log and please attach it here:

  journalctl --this-boot

We should see the exact sequence of how the services are started with all the dependencies...

Comment 4 Pietpiet 2013-07-30 14:14:39 UTC
(In reply to Peter Rajnoha from comment #3)
> OK, just to make sure - if it fails and you need to manually run vgchange
> -ay, then running "systemctl start home-iedereen-publiek.mount" mounts the
> volume correctly, right?

Correct. If I try to run "systemctl start home...mount" before running vgchange -ay, it fails with:

A dependency job for home-iedereen-publiek.mount failed. See 'journalctl -xn' for details.

> Could you please try running systemd with debug logging enabled, just add
> this to your kernel command line (assuming grub is used):
> 
>   linux /vmlinuz .... systemd.log_level=debug
> systemd.log_target=journal-or-kmsg
> 
> And then try to get the log and please attach it here:
> 
>   journalctl --this-boot
> 
> We should see the exact sequence of how the services are started with all
> the dependencies...

See the attachment for the bootlog.

Comment 5 Pietpiet 2013-07-30 14:15:51 UTC
Created attachment 780694 [details]
logfile

Comment 6 Richard Tresidder 2013-08-31 11:10:20 UTC
Hi
  I'm having the same sort of problem, I believe.
I just installed a fresh Fedora 19, and I'm having trouble bringing online an LVM device that I previously created on Fedora 14.
During boot I get dropped to the emergency console, and a dump of the journal gives:
**************
systemd[1]: Job dev-mapper-vg_richos_movies\x2dLogVolMovies.device/start timed out.
systemd[1]: Job dev-mapper-vg_richos_movies\x2dLogVolMovies.device/start finished, result=timeout
systemd[1]: Timed out waiting for device dev-mapper-vg_richos_movies\x2dLogVolMovies.device.
systemd[1]: Job movies.mount/start finished, result=dependency
systemd[1]: Dependency failed for /movies.
systemd[1]: Job local-fs.target/start finished, result=dependency
systemd[1]: Dependency failed for Local File Systems.
Job fedora-autorelabel.service/start finished, result=dependency
Dependency failed for Relabel all filesystems, if necessary.
Closed jobs progress timerfd.
systemd[1]: Job fedora-autorelabel-mark.service/start finished, result=dependency
systemd[1]: Dependency failed for Mark the need to relabel after reboot.
systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
systemd[1]: Trying to enqueue job emergency.target/start/replace
**************

Upon logging into the emergency console, I can run "vgchange -ay", which brings all the volume groups online, and the LVs are subsequently mounted.
Then "systemctl default" lets the bootup process complete.
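
That is, from the emergency console:

  vgchange -ay        # activate all volume groups
  systemctl default   # continue booting to the default target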

Now, one minor thing: the VG for this sits on an md mirror RAID that currently has only one drive; the other was set as missing during the creation of the array, because I just wanted the option to add a drive later. I don't know why this should stop a boot - there are no errors reported regarding the array.

I'm not sure how to get more information about what caused the timeout.
I've added systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M
to the kernel parameters, but nothing else appears relevant.
The system is fully up to date as of this report.

I've tried to include everything previously asked for; not sure what else is needed.
Thanks
  Richard
*************
cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sdd1[1]
      3907016383 blocks super 1.2 [2/1] [_U]
      
md1 : active raid1 sdc1[1] sdb1[0]
      1953382272 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

*************
lvm dumpconfig global/use_lvmetad
use_lvmetad=1

*************
systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation.service lvm2-activation-early.service
lvm2-lvmetad.socket - LVM2 metadata daemon socket
       Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled)
       Active: active (running) since Sat 2013-08-31 16:40:57 WST; 2h 11min ago
         Docs: man:lvmetad(8)
               man:lvmetad(8)
       Listen: /run/lvm/lvmetad.socket (Stream)


lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: active (running) since Sat 2013-08-31 17:50:16 WST; 1h 2min ago
     Docs: man:lvmetad(8)
 Main PID: 2522 (lvmetad)
   CGroup: name=systemd:/system/lvm2-lvmetad.service
           └─2522 /usr/sbin/lvmetad


lvm2-activation.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)


lvm2-activation-early.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)

*************
lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0 558.9G  0 disk  
├─sda1                                8:1    0   500M  0 part  /boot
├─sda2                                8:2    0   3.7G  0 part  [SWAP]
└─sda3                                8:3    0 400.4G  0 part  
  ├─fedora_server-root              253:0    0 195.3G  0 lvm   /
  ├─fedora_server-home              253:3    0   9.8G  0 lvm   
  └─fedora_server-var               253:4    0 195.3G  0 lvm   /var
sdb                                   8:16   0   1.8T  0 disk  
└─sdb1                                8:17   0   1.8T  0 part  
  └─md1                               9:1    0   1.8T  0 raid1 
    └─vg_richos_home-lv_home        253:2    0   1.8T  0 lvm   /home
sdc                                   8:32   0   1.8T  0 disk  
└─sdc1                                8:33   0   1.8T  0 part  
  └─md1                               9:1    0   1.8T  0 raid1 
    └─vg_richos_home-lv_home        253:2    0   1.8T  0 lvm   /home
sdd                                   8:48   0   3.7T  0 disk  
└─sdd1                                8:49   0   3.7T  0 part  
  └─md2                               9:2    0   3.7T  0 raid1 
    └─vg_richos_movies-LogVolMovies 253:1    0   3.7T  0 lvm   /movies
sr0                                  11:0    1  1024M  0 rom   

****************
fstab. Note that I created a new md mirror RAID using Fedora 19 and then created a new home volume, which I mounted. This one seems to happily get going without my having to do anything special. It is only the movies one that fails to mount at boot.
/dev/mapper/fedora_server-root /                     ext4    defaults        1 1
UUID=cab593a0-4b31-47d4-9b00-85407420d8f5 /boot      ext4    defaults        1 2
#/dev/mapper/fedora_server-home /home                ext4    defaults        1 2
/dev/mapper/fedora_server-var /var                   ext4    defaults        1 2
UUID=4f2eab27-3ab8-4419-bee7-b6d06c3a24e9 swap       swap    defaults        0 0
/dev/mapper/vg_richos_movies-LogVolMovies /movies    ext4    defaults        1 2
/dev/mapper/vg_richos_home-lv_home      /home        ext4    defaults        1 2

*****************

pvdisplay --verbose
    Scanning for physical volume names
  No device found for PV zzV9Ku-ZkP9-pxVF-mVAN-t65V-EaYQ-4w6s6V.
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg_richos_movies
  PV Size               3.64 TiB / not usable 1.69 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              953861
  Free PE               0
  Allocated PE          953861
  PV UUID               8cY759-j3Ax-35kB-PGGF-OjCC-Uuqp-lRjZB9
   
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               fedora_server
  PV Size               400.39 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              102500
  Free PE               0
  Allocated PE          102500
  PV UUID               AIPVTw-BF8S-i3DC-ZeVX-aW3z-dFsc-U2xfTu
   
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg_richos_home
  PV Size               1.82 TiB / not usable 3.88 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476899
  Free PE               0
  Allocated PE          476899
  PV UUID               NY0ugJ-D7uK-zvbR-vTOT-LhhZ-nTN3-qF19KE


******************
vgdisplay --verbose
    Finding all volume groups
    Finding volume group "fedora_server"
  --- Volume group ---
  VG Name               fedora_server
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               400.39 GiB
  PE Size               4.00 MiB
  Total PE              102500
  Alloc PE / Size       102500 / 400.39 GiB
  Free  PE / Size       0 / 0   
  VG UUID               4QyisA-KHz3-Mcs6-Jmok-OgCX-a5Jw-b8n3j2
   
  --- Logical volume ---
  LV Path                /dev/fedora_server/root
  LV Name                root
  VG Name                fedora_server
  LV UUID                euZmcl-ZUDx-bwte-VRt5-bVS1-8GYp-koAZTg
  LV Write Access        read/write
  LV Creation host, time Server, 2013-08-25 15:22:44 +0800
  LV Status              available
  # open                 1
  LV Size                195.31 GiB
  Current LE             50000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/fedora_server/home
  LV Name                home
  VG Name                fedora_server
  LV UUID                sr2Yff-pqCa-eZln-TJuq-Fb07-msge-Jivy3b
  LV Write Access        read/write
  LV Creation host, time Server, 2013-08-25 15:22:50 +0800
  LV Status              available
  # open                 0
  LV Size                9.77 GiB
  Current LE             2500
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/fedora_server/var
  LV Name                var
  VG Name                fedora_server
  LV UUID                SMsr5k-YXMu-AUHy-bamE-mITp-5PDG-WCNUWT
  LV Write Access        read/write
  LV Creation host, time Server, 2013-08-25 15:22:51 +0800
  LV Status              available
  # open                 1
  LV Size                195.31 GiB
  Current LE             50000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
   
  --- Physical volumes ---
  PV Name               /dev/sda3     
  PV UUID               AIPVTw-BF8S-i3DC-ZeVX-aW3z-dFsc-U2xfTu
  PV Status             allocatable
  Total PE / Free PE    102500 / 0
   
    Finding volume group "vg_richos_movies"
  --- Volume group ---
  VG Name               vg_richos_movies
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               1
  Max PV                256
  Cur PV                1
  Act PV                1
  VG Size               3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / 3.64 TiB
  Free  PE / Size       0 / 0   
  VG UUID               hlzUxf-huoV-UI4K-B3GC-e34d-syFt-xcOerp
   
  --- Logical volume ---
  LV Path                /dev/vg_richos_movies/LogVolMovies
  LV Name                LogVolMovies
  VG Name                vg_richos_movies
  LV UUID                Db18cq-nTj6-9vJ6-Wba4-AiHL-hddl-52xIP8
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                3.64 TiB
  Current LE             953861
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Physical volumes ---
  PV Name               /dev/md2     
  PV UUID               8cY759-j3Ax-35kB-PGGF-OjCC-Uuqp-lRjZB9
  PV Status             allocatable
  Total PE / Free PE    953861 / 0
   
    Finding volume group "vg_richos_home"
  --- Volume group ---
  VG Name               vg_richos_home
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               1
  Max PV                256
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476899
  Alloc PE / Size       476899 / 1.82 TiB
  Free  PE / Size       0 / 0   
  VG UUID               0q1PDS-DgPR-LcGu-wac2-W0jA-ncru-RZSI4Y
   
  --- Logical volume ---
  LV Path                /dev/vg_richos_home/lv_home
  LV Name                lv_home
  VG Name                vg_richos_home
  LV UUID                LNZxRF-jZca-LncW-mqKE-iVBw-vuUm-QodU9u
  LV Write Access        read/write
  LV Creation host, time *****.******.au, 2013-08-29 19:50:58 +0800
  LV Status              available
  # open                 1
  LV Size                1.82 TiB
  Current LE             476899
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Physical volumes ---
  PV Name               /dev/md1     
  PV UUID               NY0ugJ-D7uK-zvbR-vTOT-LhhZ-nTN3-qF19KE
  PV Status             allocatable
  Total PE / Free PE    476899 / 0

***************************

Comment 7 Richard Tresidder 2013-09-01 08:25:35 UTC
OK, I think I've worked out my problem.
I noticed there was an unknown device when using pvscan etc., with a PV UUID of zzV9Ku-ZkP9-pxVF-mVAN-t65V-EaYQ-4w6s6V.
Strange.
So I did a dd of /dev/sdd1 and, lo and behold, the second block had an LVM PV ID.
There was also some other stale garbage at around the 10th block, right after the RAID block info I guess (a whole pile of FE FFs). Anyway, I zeroed out this dud PV entry and the additional junk, and all was well again.
When I set this disk up originally I placed the PV straight onto the 1st partition and initialised it, but then changed my mind, went back, and placed the RAID on the first partition with the PV on top.
I'm still buggered if I can spot anything in the boot logs that points to an issue like this.
I've previously had a bit of a nightmare removing these IDs from disks that I wanted to re-purpose. Is there a good tool I should be using to examine / clean disks of LVM labels etc.? Previous searches turned up methods that seemed a bit long and difficult.
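
For reference, wipefs(8) from util-linux can do this kind of signature inspection and removal; a sketch (hypothetical device name):

  # list any filesystem/RAID/LVM signatures found on the partition
  wipefs /dev/sdX1
  # erase all detected signatures - destructive, double-check the device first
  wipefs -a /dev/sdX1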

Cheers
  Richard

Comment 8 Lubomir Bulej 2013-09-12 16:30:36 UTC
I have the same problem as the original poster.

The "start job for device" (what a terrible name) eventually times out and I get to enter the root password. After loggin in as root, I just run "vgchange -ay" and Ctrl-D to log out and resume the boot process, which then safely gets to gdm login scree.

On a laptop this is a slightly lesser pain, because I normally just suspend the machine and only reboot when I want a new kernel. But it makes me extremely afraid to reboot a remote server.

Comment 9 Peter Rajnoha 2013-09-17 09:44:17 UTC
Please, try adding this to the kernel cmd line:
  udev.children-max=10000

I suspect this might be caused by the new limit introduced on the number of udev worker processes. If that's the case, we already have a fix for it - I'd then update lvm2 in F19 to include the fix.
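
To make the parameter persistent, it would typically be appended to GRUB_CMDLINE_LINUX (a sketch, assuming grub2 on Fedora):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX="... udev.children-max=10000"
  # then regenerate the config
  grub2-mkconfig -o /boot/grub2/grub.cfg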

Comment 10 Lubomir Bulej 2013-09-17 11:53:26 UTC
The option did not help; the boot still gets stuck on the start job for the device.
I'm attaching the info you asked the original poster for.

Interestingly, the root filesystem gets activated, but the /home filesystem does not (I've tried removing the label and using /dev/vg0/usr instead, but to no avail).

lvmetad does not seem to be running, but the LVM configuration says to use it. I have never touched the LVM configuration and wasn't aware of lvmetad's existence at all. I upgraded from Fedora 18 with fedup, so for all I know the systemd setup for LVM might be slightly off.

--------------------------------------------------------------------------------
$lvm dumpconfig global/use_lvmetad

  WARNING: Failed to connect to lvmetad: No such file or directory. Falling back to internal scanning.
use_lvmetad=1

--------------------------------------------------------------------------------
$systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation-early.service lvm2-activation.service

lvm2-lvmetad.socket - LVM2 metadata daemon socket
       Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; disabled)
       Active: inactive (dead)
         Docs: man:lvmetad(8)
               man:lvmetad(8)
       Listen: /run/lvm/lvmetad.socket (Stream)

Sep 17 13:28:15 irian.ms.mff.cuni.cz systemd[1]: Starting LVM2 metadata daemon socket.
Sep 17 13:28:15 irian.ms.mff.cuni.cz systemd[1]: Listening on LVM2 metadata daemon socket.

lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: inactive (dead)
     Docs: man:lvmetad(8)

Sep 17 13:28:15 irian.ms.mff.cuni.cz systemd[1]: Starting LVM2 metadata daemon...
Sep 17 13:28:15 irian.ms.mff.cuni.cz systemd[1]: Started LVM2 metadata daemon.
Sep 17 13:32:12 irian.ms.mff.cuni.cz systemd[1]: Stopping LVM2 metadata daemon...
Sep 17 13:32:12 irian.ms.mff.cuni.cz systemd[1]: Stopped LVM2 metadata daemon.

lvm2-activation-early.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)


lvm2-activation.service
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)

--------------------------------------------------------------------------------
$lsblk (pre- vgchange)

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 223.6G  0 disk 
├─sda1        8:1    0   508M  0 part /boot
├─sda2        8:2    0    24G  0 part 
├─sda3        8:3    0    16G  0 part 
└─sda4        8:4    0 183.1G  0 part 
  └─vg0-lnx 253:0    0    32G  0 lvm  /
sdb           8:16   0 232.9G  0 disk 
└─sdb1        8:17   0 232.9G  0 part 

--------------------------------------------------------------------------------
$lsblk (post- vgchange)

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 223.6G  0 disk 
├─sda1        8:1    0   508M  0 part /boot
├─sda2        8:2    0    24G  0 part 
├─sda3        8:3    0    16G  0 part 
└─sda4        8:4    0 183.1G  0 part 
  ├─vg0-lnx 253:0    0    32G  0 lvm  /
  └─vg0-usr 253:1    0   128G  0 lvm  /home
sdb           8:16   0 232.9G  0 disk 
└─sdb1        8:17   0 232.9G  0 part 

--------------------------------------------------------------------------------
$cat /etc/fstab

LABEL=IRIAN2-LNX	/		ext4	defaults,noatime,nodiratime,discard,commit=600,acl,user_xattr			1 1
LABEL=IRIAN2-USR	/home		ext4	defaults,noatime,nodiratime,discard,commit=600,barrier=0,journal_async_commit	1 2
LABEL=IRIAN2-SYS	/boot		ext2	defaults,noatime								1 3

--------------------------------------------------------------------------------
$journalctl -xn

-- Logs begin at Tue 2013-07-16 16:51:41 CEST, end at Tue 2013-09-17 13:34:37 CEST. --
Sep 17 13:34:08 irian.ms.mff.cuni.cz systemd[1]: Mounted /boot.
-- Subject: Unit boot.mount has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit boot.mount has finished starting up.
-- 
-- The start-up result is done.
Sep 17 13:34:08 irian.ms.mff.cuni.cz systemd[1]: Startup finished in 1.720s (kernel) + 890ms (initrd) + 1min 30.238s (userspace) = 1min 32.849s.
-- Subject: System start-up is now complete
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- All system services necessary queued for starting at boot have been
-- successfully started. Note that this does not mean that the machine is
-- now idle as services might still be busy with completing start-up.
-- 
-- Kernel start-up required 1720608 microseconds.
-- 
-- Initial RAM disk start-up required 890759 microseconds.
-- 
-- Userspace start-up required 90238067 microseconds.
Sep 17 13:34:08 irian.ms.mff.cuni.cz systemd[506]: Failed at step EXEC spawning /bin/plymouth: No such file or directory
-- Subject: Process /bin/plymouth could not be executed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/641257651c1b4ec9a8624d7a40a9e1e7
-- 
-- The process /bin/plymouth could not be executed and failed.
-- 
-- The error number returned while executing this process is 2.
Sep 17 13:34:08 irian.ms.mff.cuni.cz kernel: [92B blob data]
Sep 17 13:34:08 irian.ms.mff.cuni.cz kernel: EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem
Sep 17 13:34:08 irian.ms.mff.cuni.cz kernel: EXT4-fs (sda1): mounted filesystem without journal. Opts: (null)
Sep 17 13:34:08 irian.ms.mff.cuni.cz auditctl[505]: No rules
Sep 17 13:34:08 irian.ms.mff.cuni.cz auditctl[505]: AUDIT_STATUS: enabled=0 flag=1 pid=0 rate_limit=0 backlog_limit=320 lost=0 backlog=0
Sep 17 13:34:37 irian.ms.mff.cuni.cz systemd[1]: Starting Stop Read-Ahead Data Collection...
-- Subject: Unit systemd-readahead-done.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit systemd-readahead-done.service has begun starting up.
Sep 17 13:34:37 irian.ms.mff.cuni.cz systemd[1]: Started Stop Read-Ahead Data Collection.
-- Subject: Unit systemd-readahead-done.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit systemd-readahead-done.service has finished starting up.
-- 
-- The start-up result is done.

Comment 11 Peter Rajnoha 2013-09-17 12:11:53 UTC
(In reply to Lubomir Bulej from comment #10)
> The option did not help, the boot still gets stuck on the start job for
> device.
> I'm attaching the info you wanted from the original poster.
> 
> Interestingly, the root filesystem gets activated, but the /home filesystem
> does not (I've tried removing the label and using /dev/vg0/usr instead, but
> to no avail).
> 

The LV with the root fs gets activated in the initramfs - dracut (the initramfs) activates that volume directly, so it does not count as "autoactivation". So that's OK. All the other LVs should be autoactivated if lvmetad is used.

> $systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
> lvm2-activation-early.service lvm2-activation.service
> 
> lvm2-lvmetad.socket - LVM2 metadata daemon socket
>        Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; disabled)
>        Active: inactive (dead)
>          Docs: man:lvmetad(8)
>                man:lvmetad(8)
>        Listen: /run/lvm/lvmetad.socket (Stream)
> 

This should be "enabled" and "active" for proper functionality! And it should have been enabled by systemd, based on the preset file, during the update:

  /lib/systemd/system-preset/90-default.preset

This file has the line "enable lvm2-lvmetad.*".

So this failed on the systemd side during the update procedure!
Please, try enabling the lvm2-lvmetad.socket by calling:

  systemctl enable lvm2-lvmetad.socket

...and then try rebooting your system. Does it help?

> lvm2-lvmetad.service - LVM2 metadata daemon
>    Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
>    Active: inactive (dead)
>      Docs: man:lvmetad(8)
> 

This is OK, this doesn't need to be enabled as the lvm2-lvmetad.socket instantiates the service on first socket access. Well, only if the lvm2-lvmetad.socket is properly enabled, of course...
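
A quick way to verify that state (a sketch; expected output shown as comments):

  systemctl is-enabled lvm2-lvmetad.socket    # should report: enabled
  systemctl is-enabled lvm2-lvmetad.service   # disabled is fine - it's socket-activated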

Comment 12 Lubomir Bulej 2013-09-17 12:18:18 UTC
(In reply to Peter Rajnoha from comment #11)
> (In reply to Lubomir Bulej from comment #10)
> 
> > $systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
> > lvm2-activation-early.service lvm2-activation.service
> > 
> > lvm2-lvmetad.socket - LVM2 metadata daemon socket
> >        Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; disabled)
> >        Active: inactive (dead)
> >          Docs: man:lvmetad(8)
> >                man:lvmetad(8)
> >        Listen: /run/lvm/lvmetad.socket (Stream)
> > 
> 
> This should be "enabled" and "active" for proper functionality! And it
> should have been enabled by systemd based on the preset file during the
> update:
> 
>   /lib/systemd/system-preset/90-default.preset
> 
> This has a line enable lvm2-lvmetad.*.

Interesting. I upgraded my two laptops via fedup, and this problem manifested on both.

> 
> So this failed on systemd side during the update procedure!
> Please, try enabling the lvm2-lvmetad.socket by calling:
> 
>   systemctl enable lvm2-lvmetad.socket
> 
> ...and then try rebooting your system. Does it help?
> 

Yes, that fixed the problem! Thanks!

Comment 13 Peter Rajnoha 2013-09-17 12:35:29 UTC
(In reply to Lubomir Bulej from comment #12)
> Interesting. I upgraded my two laptops via fedup, and this problem
> manifested on both.

Was this an upgrade from F18 or from an older Fedora? F18 already had the lvm2-lvmetad.socket enabled by default (as per the system preset file).

Comment 14 Lubomir Bulej 2013-09-17 12:42:17 UTC
This was from F18, but F18 was itself an upgrade from F17, also via fedup...

I was actually booting the F18 kernel on F19 for some time, because it worked -- I did not have time to look into the issue back then (in part due to systemd not being as debugging-friendly as the old init scripts).

So the F18 initrd was somehow resistant to lvmetad.socket not being active/enabled.

Comment 15 Pietpiet 2013-09-22 14:38:53 UTC
I've had time to look at my system again, but the advice given in comment 11 does not work for me. The socket is enabled, but the LVM file system is still not activated.

Comment 16 Sergio Pascual 2013-10-21 09:47:46 UTC
I'm suffering from this after doing a fedup upgrade from F19 to current F20. I'm using BIOS RAID 1 under LVM.

# lvm dumpconfig global/use_lvmetad
use_lvmetad=1

# systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation-early.service lvm2-activation.service
lvm2-lvmetad.socket - LVM2 metadata daemon socket
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled)
   Active: active (running) since lun 2013-10-21 11:31:38 CEST; 5min ago
     Docs: man:lvmetad(8)
   Listen: /run/lvm/lvmetad.socket (Stream)


lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: active (running) since lun 2013-10-21 11:36:12 CEST; 27s ago
     Docs: man:lvmetad(8)
  Process: 3403 ExecStart=/usr/sbin/lvmetad (code=exited, status=0/SUCCESS)
 Main PID: 3404 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─3404 /usr/sbin/lvmetad

oct 21 11:36:12 myhost systemd[1]: Starting LVM2 metadata daemon...
oct 21 11:36:12 myhost systemd[1]: Started LVM2 metadata daemon.

lvm2-activation-early.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)


lvm2-activation.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

Comment 17 Sergio Pascual 2013-10-28 13:04:13 UTC
Still not working with lvm2-2.02.103-2.fc20.x86_64.

Comment 18 Peter Rajnoha 2013-10-29 08:20:25 UTC
I'm sorry for the problems; I'll revisit this as soon as possible. I'm closing this one as a duplicate of bug #1023250 - please watch that bug report for changes. Thanks.

*** This bug has been marked as a duplicate of bug 1023250 ***