Bug 1420800 - smartd: please downgrade message when no disks found
Summary: smartd: please downgrade message when no disks found
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: smartmontools
Version: 25
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Michal Hlavinka
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-09 14:24 UTC by Zbigniew Jędrzejewski-Szmek
Modified: 2017-12-12 10:26 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-12 10:26:12 UTC
Type: Bug



Description Zbigniew Jędrzejewski-Szmek 2017-02-09 14:24:28 UTC
Description of problem:
Currently, any VM with virtio has the following message during boot:

smartd[760]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
smartd[760]: In the system's table of devices NO devices found to scan

This sounds scary ("aborted", etc), and is logged at LOG_ERROR level. It is completely OK to have a machine with no devices suitable for smartd. It might even be prudent to check if smartd is running in a VM and then act accordingly.
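The "check if smartd is running in a VM and act accordingly" idea can be sketched with systemd's virtualization detector; this is a hedged sketch assuming `systemd-detect-virt` is available (it ships with systemd on Fedora):

```shell
# Sketch: decide whether SMART monitoring is useful on this host.
# systemd-detect-virt exits 0 (printing the hypervisor name) in a VM,
# and exits nonzero (printing "none") on bare metal.
if systemd-detect-virt --quiet 2>/dev/null; then
    verdict="virtualized"   # no real disks to monitor; smartd adds no value
else
    verdict="bare-metal-or-unknown"
fi
echo "$verdict"
```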

Version-Release number of selected component (if applicable):
smartmontools-6.5-1.fc25.x86_64 (but this hasn't changed in earlier or later versions afaict)

How reproducible:
100%.

Expected results:
No non-actionable logs at level >= warning.

Comment 1 Michal Hlavinka 2017-02-10 15:27:44 UTC
This won't be changed.

Fedora package uses configuration file where it is specified to use autodetection. Reporting error when autodetection can't find any usable drive is correct behavior.

Changing smartmontools to detect VM and ignore autodetection specified in configuration would be ugly.

a) If you are doing a manual installation yourself, your options are to not install smartmontools, to disable it after installation, or to change the configuration file to your liking.

b) If you are automating your VM installation, update the kickstart not to install smartmontools during system installation.

c) If there is some special product or spin, ask the maintainer of that spin/product not to install smartmontools, or to disable smartmontools in systemd's default presets.
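Concretely, the first two workarounds above might look like this (commands are illustrative for Fedora; package and unit names as used elsewhere in this report):

```shell
# (a) disable the service after installation:
sudo systemctl disable --now smartd.service
# (a) or remove the package entirely:
sudo dnf remove smartmontools
# (b) in a kickstart %packages section, exclude the package:
#   %packages
#   -smartmontools
#   %end
```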

Comment 2 Zbigniew Jędrzejewski-Szmek 2017-02-10 16:01:00 UTC
I'm not saying that it's a bug in smartd, but it's something that still needs to be resolved. The message is annoying and confusing. From the point of view of a user, what matters is not whether smartd is technically correct, but that the message does not appear when it isn't useful.

smartmontools is installed by default, it's part of @Standard group in comps, and it ends up in various images by default. Running standard Fedora images in VMs is pretty common, the package should be prepared for that.

If the default DEVICESCAN line is wrong, maybe it should be changed to something different. Maybe smartd itself should do something different if it's running in a VM. Maybe ConditionVirtualization=no should be added to the unit file. I don't know about your package to know what the best solution would be.
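The `ConditionVirtualization=no` suggestion could be tried locally with a drop-in, without touching the packaged unit file. A hypothetical sketch (the drop-in file name is an assumption, not part of any package):

```shell
# Create a drop-in so smartd.service is skipped under virtualization:
sudo mkdir -p /etc/systemd/system/smartd.service.d
sudo tee /etc/systemd/system/smartd.service.d/10-novirt.conf <<'EOF'
[Unit]
ConditionVirtualization=no
EOF
sudo systemctl daemon-reload
```

With the condition unmet, systemd skips the unit at boot and logs it as skipped rather than failed, which is exactly the quieter behavior being asked for here.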

Also, note the error line "/dev/discs/disc*" — that's from devfs, which hasn't been a thing for a while.

Comment 3 Michal Hlavinka 2017-02-10 16:12:23 UTC
(In reply to Michal Hlavinka from comment #1)
> This won't be changed.

This is a configuration issue. If you don't want smartmontools running, turn it off or use a different configuration.

Comment 4 Zbigniew Jędrzejewski-Szmek 2017-02-10 16:24:30 UTC
(In reply to Michal Hlavinka from comment #3)
> (In reply to Michal Hlavinka from comment #1)
> > This won't be changed.
> 
> This is a configuration issue. If you don't want smartmontools running, turn
> it off or use a different configuration

Right. Exactly. The configuration is wrong, and I'm asking you to change the configuration.

It's not my configuration. It's the DEFAULT Fedora configuration.

Once again:
1. smartmontools is in @Standard in comps
2. smartd.service is enabled by default in systemd presets

Since the daemon is installed and started by default, it should DTRT automatically. Logging a cryptic error is not that.

There's a rule for enabling services by default [https://fedoraproject.org/wiki/Packaging:DefaultServices#Locally_running_services]:

> If a service does not require manual configuration to be functional... it may be enabled by default.

If smartd is not smart enough to be enabled by default, it should be dropped from comps or presets.

Comment 5 Michal Hlavinka 2017-02-10 16:46:26 UTC
Once again. This is not a bug and it won't be changed.

The configuration is for the main use case, which is running Fedora on real hardware.

If you have a different use case, you can use a different spin/product that can have different defaults. If there is no such spin/product and you are forced to use a different one, you will have to change the configuration (including which packages to install) yourself.

Comment 6 Zbigniew Jędrzejewski-Szmek 2017-02-10 17:12:39 UTC
> Configuration is for main use case, which is running Fedora on real hw.

That used to be true, but times have changed, and people run Fedora in VMs.

> If you have different use case, you can use different spin/product that can have different defaults.

Once again: smartmontools is in @standard, which is pulled in by most products and spins, including Server. I think it's pretty obvious that Server images are expected to work out of the box under virtualization. In particular, most QA tests are done this way, people use Server on rented VMs, etc.

Comment 7 Jason Tibbitts 2017-02-15 22:53:44 UTC
I would wager that a very significant fraction of Fedora installs are in VMs these days.  Our default configuration should work just as well in a VM as it does on real hardware.

I was going to expound on how smartd is ill-suited to the modern environment where everything is hotpluggable, but then I found https://www.smartmontools.org/ticket/60 and so I guess that would be pointless.  For that case I think the udev rules should be triggering a reload of smartd on device hotplug.
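The "reload smartd on hotplug" idea could be prototyped with a udev rule; this is a hypothetical sketch (the rule file name and match keys are assumptions, not an existing Fedora rule):

```shell
# Hypothetical rule: on block-device add/remove, ask systemd to
# reload or restart smartd so it rescans for devices.
sudo tee /etc/udev/rules.d/90-smartd-hotplug.rules <<'EOF'
ACTION=="add|remove", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
  RUN+="/usr/bin/systemctl try-reload-or-restart smartd.service"
EOF
sudo udevadm control --reload
```

Note that udev's `RUN+=` is meant for short-lived commands and upstream discourages invoking long-running services from it; a `TAG+="systemd"`-based approach might be cleaner, but the sketch shows the idea.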

And if smartd is not intelligent enough to enumerate devices the proper way (via udev), it could at least grow a modern configuration file that can include snippets from a directory, so udev rules could trigger the generation of appropriate config snippets.

But in the end, smartd is just really terrible when it comes to this kind of thing and even the obvious workarounds are hampered by its '90s design.  But even considering that, continuing the situation where it just spews failure notices to the logs in the default case is really poor policy.

Comment 8 l.skywalker 2017-03-12 17:41:17 UTC
It does the same thing if there are only NVMe disks...

So it's not only when running in a virtualized environment.

Mar 12 09:55:41 localhost.localdomain smartd[1072]: smartd 6.5 2016-05-07 r4318 [x86_64-linux-4.9.13-201.fc25.x86_64] (local build)
Mar 12 09:55:41 localhost.localdomain smartd[1072]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Mar 12 09:55:41 localhost.localdomain smartd[1072]: Opened configuration file /etc/smartmontools/smartd.conf
Mar 12 09:55:41 localhost.localdomain smartd[1072]: Configuration file /etc/smartmontools/smartd.conf was parsed, found DEVICESCAN, scanning devices
Mar 12 09:55:41 localhost.localdomain smartd[1072]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Mar 12 09:55:41 localhost.localdomain smartd[1072]: In the system's table of devices NO devices found to scan
Mar 12 09:55:41 localhost.localdomain smartd[1072]: Monitoring 0 ATA/SATA, 0 SCSI/SAS and 0 NVMe devices

With
ls -l /dev/disk/by-id
[...]
lrwxrwxrwx. 1 root root 13 Mar 12 10:37 nvme-eui.002538bb61b52bba -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Mar 12 10:37 nvme-eui.002538bb61b52bba-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 15 Mar 12 10:37 nvme-eui.002538bb61b52bba-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Mar 12 10:37 nvme-eui.002538bb61b52bba-part3 -> ../../nvme0n1p3

ls -l /dev/nvme*
crw-------. 1 root root 246, 0 Mar 12 09:55 /dev/nvme0
brw-rw----. 1 root disk 259, 0 Mar 12 10:37 /dev/nvme0n1
brw-rw----. 1 root disk 259, 1 Mar 12 10:37 /dev/nvme0n1p1
brw-rw----. 1 root disk 259, 2 Mar 12 10:37 /dev/nvme0n1p2
brw-rw----. 1 root disk 259, 3 Mar 12 10:37 /dev/nvme0n1p3
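For an NVMe-only system like this one, the device can at least be queried directly rather than relying on DEVICESCAN; smartmontools 6.5 documents NVMe support as experimental, so the device type may need to be explicit. A hedged sketch (device path as shown in the listing above):

```shell
# Query the NVMe controller directly with an explicit device type:
sudo smartctl -d nvme -a /dev/nvme0
# Or replace DEVICESCAN in /etc/smartmontools/smartd.conf with an
# explicit device line, e.g.:
#   /dev/nvme0 -d nvme
```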

Comment 9 Fedora End Of Life 2017-11-16 18:51:49 UTC
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '25'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 25 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 10 Fedora End Of Life 2017-12-12 10:26:12 UTC
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

