Bug 2105963 - Concerning messages in CSI pods "Cannot run systemd-run, assuming non-systemd OS"
Keywords:
Status: CLOSED DUPLICATE of bug 2096395
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: csi-driver
Version: 4.8
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-07-11 11:01 UTC by Miguel Blach
Modified: 2022-08-22 11:51 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-12 06:49:30 UTC
Embargoed:



Description Miguel Blach 2022-07-11 11:01:32 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The CSI pod logs contain the following messages:
2022-07-08T12:26:11.795806762Z I0708 12:26:11.795582 3618438 mount_linux.go:198] Cannot run systemd-run, assuming non-systemd OS
2022-07-08T12:26:11.795806762Z I0708 12:26:11.795599 3618438 mount_linux.go:199] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
2022-07-08T12:26:11.795806762Z Failed to create bus connection: Host is down
2022-07-08T12:26:11.795806762Z , failed with: exit status 1
2022-07-08T12:26:11.796105554Z I0708 12:26:11.795858 3618438 server.go:131] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
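
For context, these log lines originate in the mounter of the Kubernetes mount utilities (mount_linux.go), which probes whether systemd is available before deciding how to run mount commands. The following is a minimal, simplified Go sketch of such a probe; the function name `detectSystemd` and its exact behavior are illustrative assumptions, not the actual mount_linux.go implementation:

```go
package main

import (
	"fmt"
	"os/exec"
)

// detectSystemd is a simplified, hypothetical version of the check the
// Kubernetes mount utilities perform: try to execute systemd-run, and if
// that fails, assume the host is not running systemd as PID 1.
func detectSystemd() bool {
	cmd := exec.Command("systemd-run", "--version")
	if err := cmd.Run(); err != nil {
		// In a container without access to the host's systemd, this path
		// is hit on every probe, producing the repeated log line
		// "Cannot run systemd-run, assuming non-systemd OS".
		return false
	}
	return true
}

func main() {
	if detectSystemd() {
		fmt.Println("systemd detected: mount commands can be wrapped with systemd-run")
	} else {
		fmt.Println("no systemd: falling back to direct mount execution")
	}
}
```

Because the probe result does not change during the life of the pod, repeating it (and logging the failure) on every volume operation is what inflates the log volume described in this bug.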

We noticed that this same issue was fixed upstream:
- https://github.com/ceph/ceph-csi/issues/2890

And there was a recent PR:
- https://github.com/ceph/ceph-csi/pull/3225

Version of all relevant components (if applicable):
- OCS 4.8

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

So far, there is no impact on OCS/ODF features and functions, but this is troublesome because so many log messages are being ingested by the logging stack, which could lead to other problems in the future.

Is there any workaround available to the best of your knowledge?

None, that we are aware of.

Is this issue reproducible?

All the time in the user's environment.



Actual results:

- A large volume of repeated messages in the CSI pod logs, steadily consuming capacity in the logging stack

Expected results:

- Reduced log volume from the CSI pods.

Additional info:
- https://github.com/ceph/ceph-csi/issues/64
- https://github.com/ceph/ceph-csi/issues/2890
- https://github.com/ceph/ceph-csi/pull/3225

Comment 1 Niels de Vos 2022-07-12 06:49:30 UTC
Hi Miguel,

This seems to be a duplicate of bug 2096395. Please continue to track that bug instead.

If reducing the logs from the NodeStageVolume procedure is not sufficient, we can investigate how to improve the Kubernetes utilities so that the NodeGetVolumeStats procedure can also suppress this logging.

Thanks,
Niels

*** This bug has been marked as a duplicate of bug 2096395 ***

