Description
Jan Pazdziora (Red Hat), 2021-11-25 22:17:25 UTC
Description of problem:
Sometimes, the journal contains the message
systemd-udevd[392]: 0.0.0601: Process 'ccw_init' failed with exit code 1.
Exit code 1 sounds like a problem, but there is no information about the reason for the failure or how to fix it.
Version-Release number of selected component (if applicable):
s390utils-core-2.17.0-4.el9.s390x
How reproducible:
Not deterministic.
Steps to Reproduce:
1. Provision RHEL 9 s390x on z/VM with the kernel command line parameters
rd.dasd=0.0.0120 rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0
2. After the system installs and boots into the OS, run
journalctl -l | grep 'ccw_init.*failed with exit code 1'
Actual results:
Sometimes, there is a message like
Nov 23 19:24:22 machine.example.com systemd-udevd[392]: 0.0.0601: Process 'ccw_init' failed with exit code 1.
Expected results:
No failures with exit code 1.
Additional info:
I believe this is a duplicate of an older bug caused by parallelism in udev. The ccw_init script is started for each of the 3 device IDs, but only one of them "wins" and creates the corresponding network interface. It will be fixed by the migration to the new zdev scheme for initializing persistent devices.
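The race described above can be simulated with a short, self-contained sketch (this is not the real ccw_init; the directory-based "interface" is a stand-in for the real bind step). Exactly one invocation wins, and the later ones fail with exit code 1, mirroring the journal message:

```shell
#!/bin/sh
# Simulated race: three invocations (one per channel ID) each try to
# "create the interface". mkdir is atomic, so only the first succeeds;
# the rest fail, like the journal message in this bug. In reality the
# invocations are started concurrently by udev; sequential execution
# here keeps the demonstration deterministic.
workdir=$(mktemp -d)

create_iface() {
    if mkdir "$workdir/iface" 2>/dev/null; then
        echo "$1: created interface"
    else
        echo "$1: Process 'ccw_init' failed with exit code 1." >&2
        return 1
    fi
}

failures=0
for id in 0.0.0600 0.0.0601 0.0.0602; do
    create_iface "$id" || failures=$((failures + 1))
done
echo "failures: $failures"
rm -rf "$workdir"
```

Running it prints one "created interface" line and counts two failures, which is why the journal message appears for some of the three device IDs but never all of them.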
Comment 3
Jan Pazdziora (Red Hat), 2021-11-29 10:52:49 UTC
So does rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0 mean there are three devices but only one interface on the RHEL side? Are those devices actually needed by z/VM?
What confuses me is that I would expect this to fail deterministically on machines with this setup, since one script should win every time. But we see the failure only sometimes, even on the same machine.
There are 3 low-level devices (or rather device IDs, or channels: read, write, and control) that are bound together via a sysfs operation to create a kernel network interface. When the device IDs appear on the bus, the udev machinery starts for each of them; the first udev script binds the 3 IDs together and succeeds, while the other 2 might fail, depending on where they are in their execution. It could likely be fixed by adding locking or similar, but we don't plan to improve the legacy udev machinery; the focus is on the new method (zdev).
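A hedged sketch of the locking idea mentioned above (on s390x the actual bind is a write of the three channel IDs to /sys/bus/ccwgroup/drivers/qeth/group, per the kernel's s390 documentation; the marker file here is an illustrative stand-in for that write, and this is not the real ccw_init):

```shell
#!/bin/sh
# Sketch: serialize the group-creation step with flock(1) so that, of
# several racing invocations, one performs the bind and the others see
# it already done and exit 0 instead of 1. The lock file and the
# $state marker file are hypothetical stand-ins for the sysfs write.
lock=$(mktemp)
state=$(mktemp -u)   # path only; the winning invocation creates it

bind_group() {
    (
        flock 9                       # one invocation at a time
        if [ ! -e "$state" ]; then
            echo bound > "$state"     # stand-in for the sysfs group write
            echo "$1: bound group"
        else
            echo "$1: group already bound, nothing to do"
        fi
    ) 9>"$lock"
}

ok=0
for id in 0.0.0600 0.0.0601 0.0.0602; do
    bind_group "$id" && ok=$((ok + 1))
done
echo "successful exits: $ok"
rm -f "$lock" "$state"
```

With the lock in place all three invocations exit 0: the first one binds, the other two detect the existing group and treat it as success rather than a failure.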
Comment 5
Jan Pazdziora (Red Hat), 2021-11-29 12:20:35 UTC
Understood, thanks for the explanation.
Does the zdev work target RHEL 9.0, or is that some longer-term goal?
Comment 7
RHEL Program Management, 2023-05-25 07:28:27 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release, so it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.