# setenforce permissive
# systemctl start mdmonitor.service
Job failed, see logs for details.

Aug 11 22:15:02 kernel: init[1]: mdmonitor.service: control process exited, code=exited status=6
Aug 11 22:15:02 arekh init[1]: mdmonitor.service: control process exited, code=exited status=6
Aug 11 22:15:02 init[1]: Unit mdmonitor.service entered maintenance state.
Aug 11 22:15:02 kernel: init[1]: Unit mdmonitor.service entered maintenance state.

mdadm-3.1.3-0.git20100804.2.fc14.x86_64
systemd-7-1.fc14.x86_64
So the service failed. But that's not a problem with systemd; that's just what mdmonitor returned when it was started. I see no problem here, and should there be one, it is in mdmonitor, not systemd.

I think it would be smart to fix mdmonitor so that it is not started as part of the normal bootup, but is instead pulled in via a device dependency from a udev/sysfs device. For example, something like this should do the job:

SUBSYSTEM=="block", KERNEL=="md*", TAG="systemd", ENV{SYSTEMD_WANTS}="mdmonitor.service"

If mdmonitor were started like that, it would be activated only if an actual md device is around. That is good for several reasons: we'd start fewer services on non-md systems, the failed service would no longer show up in systemctl, and, even better, simply configuring an md device would magically start mdmonitor.

Reassigning to mdmonitor.
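The suggested rule would go into a udev rules file; a minimal sketch is below. The file path is illustrative, and note that systemd's own udev rules append the tag with TAG+= rather than assigning it, so a drop-in rule would more likely look like this:

```
# /etc/udev/rules.d/90-mdmonitor.rules  (illustrative path, not shipped by any package)
# Tag md block devices for systemd, and have each such device pull in
# mdmonitor.service via the SYSTEMD_WANTS device property.
SUBSYSTEM=="block", KERNEL=="md*", TAG+="systemd", ENV{SYSTEMD_WANTS}="mdmonitor.service"
```

With such a rule in place, plugging together an md array makes udev tag the device, and systemd then starts mdmonitor.service as a want of that device unit, so nothing is started on systems without md devices.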
Note that recent systemd versions consider exit code 6 from LSB services OK and will not mark them as failed, so the main issue here goes away. The suggestion from comment 1 still applies, however. That said, I'll take the liberty of removing the F14 target from this, since the main issue is gone with a systemd update.
(In reply to comment #1)
> So the service failed. But that's not a problem with systemd; that's just
> what mdmonitor returned when it was started.

The problem with this analysis is that the system the problem was reported on *has* md RAID arrays to monitor (two, in fact: one for /boot and another for everything else).
Having md arrays is not enough. Does the system have /etc/mdadm.conf, and is either the MAILADDR or PROGRAM variable set in that file? If not, mdmonitor has nothing to do and refuses to start.
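This matches the failure in comment 0: under the LSB convention, exit status 6 from an init script means "program is not configured". A rough sketch of the check the mdmonitor init script effectively performs is below; this is assumed logic for illustration, not the literal script, and a temp file stands in for /etc/mdadm.conf:

```shell
#!/bin/sh
# Assumed sketch of mdmonitor's "is it configured?" check: it needs a
# MAILADDR or PROGRAM line in /etc/mdadm.conf to know where to report.
conf=$(mktemp)   # stand-in for /etc/mdadm.conf

# A conf file that only lists arrays is not enough:
printf 'ARRAY /dev/md0 UUID=...\n' > "$conf"
if grep -qE '^(MAILADDR|PROGRAM)' "$conf"; then
    echo "mdmonitor would start"
else
    echo "mdmonitor would refuse to start (LSB exit status 6, not configured)"
fi

# Adding a MAILADDR line satisfies the check:
printf 'MAILADDR root\n' >> "$conf"
if grep -qE '^(MAILADDR|PROGRAM)' "$conf"; then
    echo "mdmonitor would start"
fi

rm -f "$conf"
```

So on the reporter's system the arrays exist, but unless mdadm.conf names a mail address or a program to run, the monitor exits with the "not configured" status seen in the logs.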
Indeed, if /etc/mdadm.conf is not properly populated, the service will not start even if there are md RAID arrays to monitor. Can you please verify that the service starts properly given a proper mdadm.conf?
Closing due to inactivity and the fact that it appears to be resolved suitably anyway.