Bug 1668982 - Boot speed regression with 219-62.el7_6.2 -- udev hotplug memory rules suspected
Keywords:
Status: CLOSED DUPLICATE of bug 1666612
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: systemd-maint
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-24 04:03 UTC by Ian Wienand
Modified: 2019-01-24 07:26 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-24 07:14:48 UTC
Target Upstream Version:
Embargoed:


Attachments
Standard boot logs with verbose systemd logging (1.29 MB, text/plain) - 2019-01-24 04:04 UTC, Ian Wienand
Boot logs with initramfs generated with systemd 219-62.el7_6.2 (3.72 MB, text/plain) - 2019-01-24 04:06 UTC, Ian Wienand
View of sysfs memory directories (8.43 KB, text/plain) - 2019-01-24 04:08 UTC, Ian Wienand


Links
CentOS bug 15735 - last updated 2019-01-24 04:03:04 UTC

Description Ian Wienand 2019-01-24 04:03:04 UTC
I'm trying to track down the cause of a significant boot-time regression when using an initramfs generated with 219-62.el7.

The short version of the referenced bug is that after upgrading systemd and regenerating the initramfs, we see significantly slower boot times.

There isn't much difference between the two initramfs images; the diff is at http://paste.openstack.org/show/743145/

Booting with systemd debugging turned on, I think the memory-hotplug rules introduced with [1] (which references bug #1614686) are a very likely suspect.  The slow-boot logs show a flood of entries like:

---
...
Jan 23 23:01:14 localhost systemd-udevd[254]: PROGRAM '/bin/uname -p' /usr/lib/udev/rules.d/40-redhat.rules:8
Jan 23 23:01:14 localhost systemd-udevd[257]: seq 741 running
Jan 23 23:01:14 localhost systemd-udevd[239]: '/bin/uname -p'(out) 'x86_64'
Jan 23 23:01:14 localhost systemd-udevd[239]: '/bin/uname -p' [250] exit with return code 0
Jan 23 23:01:14 localhost systemd-udevd[239]: PROGRAM '/bin/systemd-detect-virt' /usr/lib/udev/rules.d/40-redhat.rules:11
Jan 23 23:01:14 localhost systemd-udevd[224]: seq 742 queued, 'add' 'bus'
Jan 23 23:01:14 localhost systemd-udevd[224]: maximum number (10) of children reached
Jan 23 23:01:14 localhost systemd-udevd[224]: seq 743 queued, 'add' 'bus'
Jan 23 23:01:14 localhost systemd-udevd[224]: maximum number (10) of children reached
Jan 23 23:01:14 localhost systemd-udevd[224]: maximum number (10) of children reached
...
---

Grepping for the "systemd-detect-virt" invocations shows it runs *a lot*:

---
cat boot-slow-trunc.log | grep systemd-detect-virt | grep qemu | wc -l
143
---

In the "fast" boot logs, I don't see anything at all seemingly related to memory hotplug coming up.  

The rule is quite short

---
 # Memory hotadd request
-SUBSYSTEM=="memory", ACTION=="add", PROGRAM="/bin/uname -p", RESULT!="s390*", ATTR{state}=="offline", ATTR{state}="online"
+SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"
+PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"
+
+ENV{.state}="online"
+PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
+ATTR{state}=="offline", ATTR{state}="$env{.state}"
+
+LABEL="memory_hotplug_end"
---
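
For reference, here is an annotated reading of the new rule body; the comments are editorial glosses inferred from the patch, not part of the shipped file:

---
# Intended: skip events that are not a memory hot-add (but see
# comment 6 below)
SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"
# Skip on s390, where memory onlining is handled differently
PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"

# Default policy: bring newly added memory blocks online
ENV{.state}="online"
# On bare metal (no virtualization detected), prefer online_movable
PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
# Apply the chosen state to blocks that are still offline
ATTR{state}=="offline", ATTR{state}="$env{.state}"

LABEL="memory_hotplug_end"
---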

I can't see anything obviously wrong with this as such; it seems like it would correctly skip any events it isn't interested in.  Maybe there are just far more memory events than anticipated?
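
One way to gauge that, sketched editorially against the attached "View of sysfs memory directories": count the memory block directories the guest exposes, since each one produces an "add" uevent at boot:

---
# Count memory block directories in sysfs; each corresponds to one
# memory "add" uevent that udev processes at boot.
ls -d /sys/devices/system/memory/memory* | wc -l
# Size of each block in bytes (hex); commonly 8000000 (= 128 MiB) on
# x86, so a multi-GiB guest means dozens of blocks/events.
cat /sys/devices/system/memory/block_size_bytes
---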

This is running under QEMU with binary translation (no KVM), which is essentially the setup in the CI jobs where we noticed this.  The slower boot times started making jobs time out.

[1] https://git.centos.org/blob/rpms!systemd.git/c7/SOURCES!0659-rules-implement-new-memory-hotplug-policy.patch

Comment 2 Ian Wienand 2019-01-24 04:04:11 UTC
Created attachment 1522939: Standard boot logs with verbose systemd logging

Comment 3 Ian Wienand 2019-01-24 04:06:30 UTC
Created attachment 1522940: Boot logs with initramfs generated with systemd 219-62.el7_6.2

Comment 4 Ian Wienand 2019-01-24 04:08:48 UTC
Created attachment 1522941: View of sysfs memory directories

Comment 5 Jan Synacek 2019-01-24 07:14:48 UTC

*** This bug has been marked as a duplicate of bug 1666612 ***

Comment 6 Ian Wienand 2019-01-24 07:26:41 UTC
Actually, no: looking at it, the rule is subtly wrong (thanks to Jens Harbott for pointing me in this direction :) -- it should not trigger on something like a block device, but it does:

---
# udevadm test /block/sr0
calling: test
version 219
...
rules contain 24576 bytes tokens (2048 * 12 bytes), 12369 bytes strings
1853 strings (23179 bytes), 1230 de-duplicated (11434 bytes), 624 trie nodes used
PROGRAM '/bin/uname -p' /usr/lib/udev/rules.d/40-redhat.rules:8
starting '/bin/uname -p'
'/bin/uname -p'(out) 'x86_64'
'/bin/uname -p' [5462] exit with return code 0
PROGRAM '/bin/systemd-detect-virt' /usr/lib/udev/rules.d/40-redhat.rules:11
starting '/bin/systemd-detect-virt'
'/bin/systemd-detect-virt'(out) 'qemu'
'/bin/systemd-detect-virt' [5463] exit with return code 0
...

ACTION=add
SUBSYSTEM=block

---

The new rule is:

 SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"

so when "SUBSYSTEM=block, ACTION=add" we get "True, False" and don't take the GOTO ...

This explains the initial symptoms reported in [1] very well: a large regression in the boot phase where udev is working through device "add" events, due to the extra overhead of forking /bin/uname and /bin/systemd-detect-virt for every one of them.

[1] https://etherpad.openstack.org/p/dib-centos-slow

