Bug 1725364 - local-fs target completes before local fs are mounted, resulting in failure of libvirtd & other services
Status: CLOSED NOTABUG
Product: Fedora
Classification: Fedora
Component: systemd
Version: 29
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: systemd-maint
QA Contact: Fedora Extras Quality Assurance
Reported: 2019-06-30 07:18 UTC by BugMasta
Modified: 2019-06-30 11:47 UTC

Last Closed: 2019-06-30 08:40:42 UTC
Type: Bug



Description BugMasta 2019-06-30 07:18:21 UTC
Description of problem:

local-fs target completes before local fs are mounted, resulting in failure of libvirtd & other services

Version-Release number of selected component (if applicable):

systemd-libs-239-12.git8bca462.fc29.x86_64

How reproducible:


Steps to Reproduce:

Reproducibility will be highly dependent on the time taken to mount disks. In my case I am using a spinning disk which may still be spinning up, so the mount takes some time to complete.

1. Have an entry in fstab, with the auto option, like:
/dev/disk/by-label/data-6tb             /mnt/6Tb-linux          auto    nofail,auto,noatime     0 0

2. Set up the libvirt save dir on the automounted fs and make a symlink like this (a command sketch follows after these steps):
/var/lib/libvirt/qemu/save -> /mnt/6Tb-linux/data/kvm/var-lib-libvirt-qemu-save

3. reboot
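
A minimal sketch of step 2, assuming the filesystem is already mounted and libvirtd is stopped. The save.orig name is just illustrative, and the qemu:qemu ownership is an assumption based on the 107:107 in the error below:

  mkdir -p /mnt/6Tb-linux/data/kvm/var-lib-libvirt-qemu-save
  chown qemu:qemu /mnt/6Tb-linux/data/kvm/var-lib-libvirt-qemu-save   # 107:107 on this host
  mv /var/lib/libvirt/qemu/save /var/lib/libvirt/qemu/save.orig       # keep the original dir; name is hypothetical
  ln -s /mnt/6Tb-linux/data/kvm/var-lib-libvirt-qemu-save /var/lib/libvirt/qemu/save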

Actual results:
from journalctl -b:
Jun 30 14:00:26 Il-Duce systemd[1]: Reached target Local File Systems.
...
Jun 30 14:00:26 Il-Duce systemd[1]: Mounting /mnt/6Tb-linux...
...
Jun 30 14:00:27 Il-Duce systemd[1]: Started Virtualization daemon.
Jun 30 14:00:27 Il-Duce audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
...
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: error : qemuStateInitialize:758 : unable to set ownership of '/var/lib/libvirt/qemu/save' to 107:107: No such file or directory
...
Jun 30 14:00:27 Il-Duce systemd[1]: Mounted /mnt/6Tb-linux.

[root@Il-Duce 06-30 15:56:39 ~]# systemctl status local-fs.target
● local-fs.target - Local File Systems
   Loaded: loaded (/usr/lib/systemd/system/local-fs.target; static; vendor preset: disabled)
   Active: active since Sun 2019-06-30 14:00:26 ACST; 1h 56min ago


Expected results:

If libvirtd depends on local-fs.target, then it would be nice if systemd actually waited for local-fs.target to finish before starting libvirtd or other dependent services.


Additional info:

Why does journalctl -b say the local-fs target was reached at 14:00:26, and systemctl status say active since 14:00:26, when the journal clearly shows the filesystems were not mounted until later, e.g. 14:00:27 for /mnt/6Tb-linux?

What does it mean when a target is "reached"? Is that supposed to mean that the target has started doing its work? I would have expected it to mean that the target has *DONE* its job, not just started it. If that is the case, there is clearly some incredible stupidity going on here, because the log says the target has been reached before the mount commands are even issued. If it only means the target has started doing its work, then there needs to be something in the log to show when it has actually *finished*.

It appears as if systemd's local-fs target is firing off commands to mount disks and considering the local-fs.target job done, without waiting for the disks to actually be mounted.

This is unacceptable. 

If libvirtd depends on local-fs.target, then it needs to wait until local-fs.target has *finished* its work.

local-fs.target is not done, until the filesystems are actually *mounted*.
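
For completeness: systemd.unit(5) documents RequiresMountsFor=, which adds both a requirement and an ordering dependency on the mount units needed for a given path. A sketch of a drop-in for this setup (the drop-in file name is hypothetical; the path is the mount point from this report):

  # /etc/systemd/system/libvirtd.service.d/wait-for-save-dir.conf   <- hypothetical drop-in name
  [Unit]
  RequiresMountsFor=/mnt/6Tb-linux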

Comment 1 BugMasta 2019-06-30 07:31:49 UTC

In addition, we have this:

[root@Il-Duce 06-30 15:29:59 ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sun 2019-06-30 14:00:27 ACST; 1h 29min ago
     Docs: man:libvirtd(8)
           https://libvirt.org
  Process: 1611 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 1611 (code=exited, status=0/SUCCESS)
      CPU: 211ms

ie:
   Active: inactive (dead) since Sun 2019-06-30 14:00:27 ACST; 1h 29min ago

Why is it dead?

As mentioned earlier, the libvirtd.service has this:
  After=local-fs.target
so it should've been able to access its save directory and start without any issue.

But the service file also has this:
  Restart=on-failure

So, why was it not restarted?
If one more attempt had been made to restart the service, it would've succeeded, because by then the 6Tb-linux fs would've been mounted.


Can someone please explain to me what "Restart=on-failure" means, if it does not mean RESTART ON FAILURE?

Comment 2 BugMasta 2019-06-30 07:59:43 UTC
It's 2019, and systemd still cannot reliably mount filesystems and start a service.

Is that too much to ask for?

Comment 3 Zbigniew Jędrzejewski-Szmek 2019-06-30 08:40:42 UTC
The docs for "nofail" [1] are
> With nofail, this mount will be only wanted, not required, by
> local-fs.target or remote-fs.target. Moreover the mount unit is not ordered
> before these target units. This means that the boot will continue without
> waiting for the mount unit and regardless whether the mount point can be
> mounted successfully.

Please remove nofail from fstab, and you should be good.

[1] https://www.freedesktop.org/software/systemd/man/systemd.mount.html#nofail
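
For anyone who wants to check the ordering on their own system, something like this should show it (systemd-escape and systemctl show are standard tools; the unit name below is derived from the mount point in this report):

  systemd-escape -p --suffix=mount /mnt/6Tb-linux      # prints the mount unit name
  systemctl show -p After,Before,WantedBy 'mnt-6Tb\x2dlinux.mount'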


For the second question:
> Process: 1611 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)

As you can see, the process returned SUCCESS, i.e. not failure, i.e. there is no reason
to restart.

Comment 4 BugMasta 2019-06-30 10:15:46 UTC
I do not want to remove nofail from fstab. If I remove nofail, then the system will drop to an emergency shell if this drive cannot be mounted. That is not "good".

The option we are talking about is called "nofail"; it is not called "nowait".

Please re-open this bug. The current state of affairs is unacceptable.

Comment 5 BugMasta 2019-06-30 10:25:53 UTC
If people do not wish to wait for a drive to mount, they can set a short timeout.
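For reference, systemd.mount(5) documents a per-device timeout that can be set in fstab, which is what a "short timeout" would look like with the entry from this report (the 10s value is only an example):

  /dev/disk/by-label/data-6tb   /mnt/6Tb-linux   auto   nofail,auto,noatime,x-systemd.device-timeout=10s   0 0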

What option do I have to stop my system dropping to an emergency shell if a filesystem can't be mounted?
Only one, and it is called "nofail".

If I use nofail, it does *NOT* mean I do not want to wait for it to mount when it is there, and it's only going to take a few seconds for the disk to spin up and mount.

Comment 6 BugMasta 2019-06-30 10:30:44 UTC
Going back to a manpage from a sysv system (you know, those old systems we used to have, where you could actually be sure one thing was done before something else was attempted), we have this:

  nofail  :  do not report errors for this device if it does not exist.

In my case, the device definitely DOES exist. It is attached. A sysv system would wait for it to be mounted, even with nofail.

systemd is not waiting for it.

That is a bug.

Comment 7 BugMasta 2019-06-30 10:39:11 UTC
Having to remove the nofail option, just because the semantics of nofail have been changed, is unacceptable.
It makes my system fragile, and liable to drop to an emergency shell if that disk is removed.


If that disk is removed, then the nofail option will allow boot to proceed, and then sure, libvirtd will fail also, but the rest of the system will come up.

That is the purpose of nofail. 

"nofail" is *NOT* there to prevent delays waiting for removable devices that are not there. For that, use noauto, or, add a new option "nowait", like the old "nobootwait" option that some systems had.

nofail has a specific purpose, and that is to designate a filesystem that the system should be able to boot without. That is all.
It does not mean we do not want to wait for that filesystem, when it is there.

Comment 8 BugMasta 2019-06-30 10:44:25 UTC
I want this bug reopened and this bug addressed.

It is unacceptable for systemd to change the behaviour of nofail, making critical systems more fragile just to pander to the convenience of incompetent users who cannot use any of the many user-space tools out there to manage mounting and unmounting removable drives.

This is a critical issue, which affects the fundamental robustness of systems.

It is not a joke.

Comment 9 BugMasta 2019-06-30 11:09:51 UTC
Since this bug has been inappropriately closed, I have submitted bug 1725389:

https://bugzilla.redhat.com/show_bug.cgi?id=1725389


This is a serious issue and I expect that bug to be addressed, not closed without any thought.

Comment 10 BugMasta 2019-06-30 11:22:00 UTC
If anyone else comes across this issue, do not remove nofail.

Do this:

x-systemd.before=, x-systemd.after=
Configures a Before= dependency or After= between the created mount unit and another systemd unit, such as a mount unit. The argument should be a unit name or an absolute path to a mount point. This option may be specified more than once. This option is particularly useful for mount point declarations with nofail option that are mounted asynchronously but need to be mounted before or after some unit start, for example, before local-fs.target unit. See Before= and After= in systemd.unit(5) for details.

So, I can use nofail and add the option:
  x-systemd.before=local-fs.target
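
Putting that together with the entry from the original report, the fstab line would become something like this (only the options field changes):

  /dev/disk/by-label/data-6tb   /mnt/6Tb-linux   auto   nofail,auto,noatime,x-systemd.before=local-fs.target   0 0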

Once again, systemd has found a way to do the simplest thing in the most arse-backwards way imaginable.

Comment 11 BugMasta 2019-06-30 11:47:47 UTC

Re the libvirtd restart issue, 

Zbigniew also said:

For the second question:
> Process: 1611 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
As you can see, the process returned SUCCESS, i.e. not failure, i.e. there is no reason
to restart.

True, but the service didn't actually start successfully, and journalctl logged this:

Jun 30 14:00:27 Il-Duce audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: info : libvirt version: 4.7.0, package: 3.fc29 (Fedora Project, 2019-05-14-18:50:20, )
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: info : hostname: Il-Duce
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: error : qemuStateInitialize:758 : unable to set ownership of '/var/lib/libvirt/qemu/save' to 107:107: No such file or directory
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: error : virStateInitialize:667 : Initialization of QEMU state driver failed: unable to set ownership of '/var/lib/libvirt/qemu/save' to 107:107: No such file or directory
Jun 30 14:00:27 Il-Duce libvirtd[1611]: 2019-06-30 04:30:27.598+0000: 1726: error : daemonRunStateInit:806 : Driver state initialization failed
Jun 30 14:00:27 Il-Duce audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 30 14:00:27 Il-Duce systemd[1]: libvirtd.service: Consumed 211ms CPU time

So obviously we have a bug in libvirtd.

Maybe I'll log a bug vs libvirtd next week. I'll call it "starting libvirtd service reports success even when it has totally failed".

I can't log it now, because the stupidity behind this systemd bug tonight has already pushed my sanity closer to the edge than I can tolerate on what should be a relaxing Sunday night.

