Bug 1781192 - lm_sensors service should not fail in VM environment
Summary: lm_sensors service should not fail in VM environment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lm_sensors
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: aegorenk
QA Contact: Jeff Bastian
URL:
Whiteboard:
Duplicates: 1921633 (view as bug list)
Depends On: 1937989
Blocks:
 
Reported: 2019-12-09 14:12 UTC by Alena
Modified: 2024-03-25 15:33 UTC
CC List: 12 users

Fixed In Version: 3.4.0-23.20180522git70f7e08
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-09 18:55:50 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: (none)


Links
Red Hat Product Errata RHBA-2021:4353 (last updated 2021-11-09 18:55:52 UTC)

Internal Links: 1921633

Description Alena 2019-12-09 14:12:14 UTC
1. Proposed title of this feature request

     lm_sensors should have ConditionVirtualization=!vm

3. What is the nature and description of the request?

   lm_sensors makes no sense in a VM, so running it wastes resources.

     Add this to the systemd unit file:
      ConditionVirtualization=!vm

4. Why does the customer need this? (List the business requirements here)

5. How would the customer like to achieve this? (List the functional requirements here)

     Add this to the systemd unit file (see the sketch at the end of this description):
      ConditionVirtualization=!vm

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
     No

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

    RHEL 8.2

9. Is the sales team involved in this request and do they have any additional input?

     No

10. List any affected packages or components.


11. Would the customer be able to assist in testing this functionality if implemented?

     No
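
For reference, the requested change would look roughly like this in the [Unit] section of /usr/lib/systemd/system/lm_sensors.service (a minimal sketch of the relevant section only; the unit shipped by the package contains more settings):
---
[Unit]
Description=Hardware Monitoring Sensors
# Skip this unit in fully virtualized guests; "!vm" means
# "not running in a virtual machine" (see systemd.unit(5)).
ConditionVirtualization=!vm
---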

Comment 1 Ondřej Lysoněk 2019-12-13 14:53:36 UTC
(In reply to Alena from comment #0)
>    lm_sensors makes no sense in a VM, so running it wastes resources.

It's not as simple as that. lm_sensors can still be useful inside a virtual machine, namely if you set up passthrough of some of the host's devices.

For instance I can imagine a scenario where a user has set up a virtual machine with GPU passthrough and they monitor the GPU temperature from within the VM.

The lm_sensors service is a oneshot service (i.e., it's not a daemon) and it runs just a couple of quick commands. Also, the service can always be disabled using systemctl, or lm_sensors can be uninstalled. Why does the customer have lm_sensors installed in the first place?
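
For instance, a guest admin could simply run (standard systemctl usage, shown only for illustration):
---
# stop lm_sensors now and keep it from starting at boot
systemctl disable --now lm_sensors.service
---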

I'm inclined to close this WONTFIX. It seems like a potentially breaking change with little benefit.

Comment 2 Ondřej Lysoněk 2020-01-09 09:36:04 UTC
Closing per comment #1.

Comment 3 Christian Horn 2021-01-29 00:21:10 UTC
Can we please revisit this?

Coming here from bz1921633, where:
- we have pcp-pmda-lmsensors, which records sensor/fan readings into
  PCP archive files
- it relies on lm_sensors, so we introduced a dependency on it
- this also brought lm_sensors.service to many virtual guests, where the
  unit runs on startup and produces errors.

microcode_ctl.service is a bit similar: it cannot update microcode from
virtual guests.  ConditionVirtualization=false solves the issue there
nicely.

So this is the issue we want to solve by requesting ConditionVirtualization=false:
---
[root@rhel8u3a ~]# systemctl status lm_sensors
● lm_sensors.service - Hardware Monitoring Sensors
   Loaded: loaded (/usr/lib/systemd/system/lm_sensors.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2021-01-28 23:53:00 JST; 9h ago
  Process: 716 ExecStart=/usr/bin/sensors -s (code=exited, status=1/FAILURE)
  Process: 703 ExecStart=/usr/libexec/lm_sensors/lm_sensors-modprobe-wrapper $BUS_MODULES $HWMON_MODULES (code=exited, status=1/FAILURE)
 Main PID: 716 (code=exited, status=1/FAILURE)

Jan 28 23:52:59 rhel8u3a.local systemd[1]: Starting Hardware Monitoring Sensors...
Jan 28 23:52:59 rhel8u3a.local lm_sensors-modprobe-wrapper[703]: No sensors with loadable kernel modules configured.
Jan 28 23:52:59 rhel8u3a.local lm_sensors-modprobe-wrapper[703]: Please, run 'sensors-detect' as root in order to search for available sensors.
Jan 28 23:52:59 rhel8u3a.local sensors[716]: No sensors found!
Jan 28 23:52:59 rhel8u3a.local sensors[716]: Make sure you loaded all the kernel drivers you need.
Jan 28 23:52:59 rhel8u3a.local sensors[716]: Try sensors-detect to find out which these are.
Jan 28 23:53:00 rhel8u3a.local systemd[1]: lm_sensors.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 23:53:00 rhel8u3a.local systemd[1]: lm_sensors.service: Failed with result 'exit-code'.
Jan 28 23:53:00 rhel8u3a.local systemd[1]: Failed to start Hardware Monitoring Sensors.
---

We also have many customers who use a single kickstart file/package
selection, which they deploy on both physical machines and virtual guests.
For them, too, it would be good if lm_sensors.service dealt better with
virtual guests.

I agree with Ondrej that in some cases lm_sensors.service might be
desired in virtual guests.  Could we then deploy for the majority
of deployments (with ConditionVirtualization=false), and have the
admin customize the service for that use case?
Customization of systemd service units is well proven and documented.
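
For example, the minority who do want lm_sensors inside a guest could clear the condition again with a standard drop-in (a sketch, assuming the unit gains ConditionVirtualization=!vm or similar):
---
# "systemctl edit lm_sensors.service" opens
# /etc/systemd/system/lm_sensors.service.d/override.conf
[Unit]
# an empty assignment resets all previously set conditions
ConditionVirtualization=
---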

Comment 7 aegorenk 2021-02-11 10:29:08 UTC
*** Bug 1921633 has been marked as a duplicate of this bug. ***

Comment 8 aegorenk 2021-02-14 15:59:21 UTC
Hi Christian, the solution I would like to propose is to introduce a flag that allows sensors to return 0 when no sensors are detected.
However, I would still print a message to stderr that says:

No sensors found!
Make sure you loaded all the kernel drivers you need.
Try sensors-detect to find out which these are.

Will this solution work for customers?

I would like to keep this message because it is a good debug point for people who actually want to run lm_sensors on VMs: since the service will never fail, it would be hard for them to notice the problem without this message in the log.

Comment 9 Christian Horn 2021-02-15 05:31:18 UTC
Hi Artem,

>  The solution I would like to propose is to introduce a flag that allows sensors to return 0 when no sensors are detected.

With that, lm_sensors.service would not fail on guests, which could
potentially solve the bz1921633 situation.
Where would you want to set the flag, and what would be the default?
I would hope for the flag to be set by default.  In other words, an
admin of RHEL in a virtual guest should not have to set this themselves.

There might be situations in which customers would want to
see lm_sensors.service failing.

> However, I would still print a message to stderr that says:
>
> No sensors found!
> Make sure you loaded all the kernel drivers you need.
> Try sensors-detect to find out which these are.
>
> Will this solution work for customers?

Would modifying that message require upstream work?
Are you open to tuning it so that admins of virtual guests understand
that they might be seeing the desired outcome?

> I would like to keep this message because it is a good debug point
> for people who actually want to run lm_sensors on VMs: since the
> service will never fail, it would be hard for them to notice the
> problem without this message in the log.

That makes sense.
But then we start relying on the error message being read by someone.
The exit code is normally the more direct way to tell that something
failed, and it triggers an admin to search for details.
Let me first understand your flag idea better.

Comment 10 aegorenk 2021-02-22 11:44:38 UTC
> Let me understand your flag idea first better.

The idea is that when no sensors are found, the message is printed to the error log (same as now) in any case. If the flag is set, the exit status will be 0 (1 if the flag is not set).
Once that is in the code, I'll modify the way systemd starts the lm_sensors service.

The idea is to create an init service for lm_sensors that has ConditionVirtualization=vm (so this init service actually starts only on virtual machines).
The init service will write the flag to a file (?), which the regular lm_sensors service will then use to get its startup flags.

In a VM environment the init service will be started and the flag will be set. In a regular environment the init service will not be started and no flag will be used.

I'm not sure what the best way to implement the systemd layer of the solution is; I'll research how it might be done and whether some other package already does this.

Does that make sense?
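
As a rough sketch of that idea (unit name, file path, and flag spelling here are all made up for illustration; the final implementation may differ):
---
# lm_sensors-init.service (hypothetical name): runs only in VMs
[Unit]
Description=Prepare lm_sensors startup flags on virtual machines
ConditionVirtualization=vm
Before=lm_sensors.service

[Service]
Type=oneshot
# write the "do not fail when no sensors are found" flag to a file that
# lm_sensors.service would read via EnvironmentFile=-/run/lm_sensors.flags
ExecStart=/bin/sh -c 'echo SENSORS_FLAGS=--no-fail > /run/lm_sensors.flags'
---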

Comment 11 Christian Horn 2021-02-24 09:16:25 UTC
Summing up my understanding of the practical outcome if this is implemented,
on a RHEL deployment in a guest with lm_sensors installed, without
manually touching files to set flags:

- on physical:
  - if sensors found, return 0
  - if sensors not found, return 1 + text
- on virtual:
  - if sensors found, return 0
  - if sensors not found, return 0 + text

That should technically solve the current biggest issue,
the failing systemd unit.  
Following remarks:

- That solution frees users who want to use lm_sensors in the guest from
  having to create a modified systemd unit.
  I still wonder whether that additional complexity is worth it,
  as only a small minority wants lm_sensors in guests.  The complexity
  comes with a risk of regressions.
- With the planned solution, virt-guest users no longer see
  a failing systemd service, which is good.

  The message in the logs still suggests the outcome might be bad:
  sensors[716]: No sensors found!
  sensors[716]: Make sure you loaded all the kernel drivers you need.
  sensors[716]: Try sensors-detect to find out which these are.

  Is that message from us or from upstream?
  Ideally, we could modify it so that it explains that not finding
  sensors may be the expected outcome, as it is for 99% of virtual
  guests.

Comment 12 Christian Horn 2021-02-24 09:21:32 UTC
Maybe the use case behind the solution you have in mind also
deserves a new systemd unit option, like "MakeServiceNeverFail"
or similar.
One implementation might be to have lm_sensors.service run a wrapper
that internally checks whether it is running on a virtual machine, and
then handles the return codes appropriately.  But having things run
directly by systemd might be cleanest.

Actually, the whole topic might also be relevant for upstream,
though I'm not sure how active upstream is.

Comment 13 aegorenk 2021-02-25 13:05:02 UTC
Upstream patch:
https://github.com/lm-sensors/lm-sensors/commit/a0ef84f6583dbf427ff5a3534528e1e72bd00137

Fedora patch:
https://src.fedoraproject.org/rpms/lm_sensors/c/a2bee3abb72af537635fcb94b51f652ca7684f5c?branch=rawhide

Christian, your understanding of the behavior is correct.

The solution is to start the service via a wrapper that determines the startup flags using /usr/bin/systemd-detect-virt.

I'll create the same patch for RHEL.
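
For the record, such a wrapper could look roughly like this (a sketch only; the actual wrapper script in the RHEL package may differ):
---
#!/bin/sh
# In a virtual machine, keep the "No sensors found!" message in the log
# but exit 0 so the unit does not fail; on bare metal, keep the old
# behavior of propagating the failure.
if /usr/bin/systemd-detect-virt --vm --quiet; then
    /usr/bin/sensors -s || exit 0
else
    exec /usr/bin/sensors -s
fi
---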

Comment 33 errata-xmlrpc 2021-11-09 18:55:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lm_sensors bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4353

