Bug 1866110
| Field | Value |
| --- | --- |
| Summary | automated TSEG size calculation |
| Product | Red Hat Enterprise Linux 9 |
| Component | qemu-kvm |
| qemu-kvm sub component | Devices |
| Status | CLOSED MIGRATED |
| Severity | unspecified |
| Priority | unspecified |
| Version | unspecified |
| Target Milestone | rc |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Laszlo Ersek <lersek> |
| Assignee | Igor Mammedov <imammedo> |
| QA Contact | Xueqiang Wei <xuwei> |
| CC | berrange, chayang, coli, imammedo, jinzhao, juzhang, nanliu, nilal, pmendezh, virt-maint |
| Keywords | FutureFeature, MigratedToJIRA, Reopened, Triaged |
| Flags | xuwei: needinfo- |
| Doc Type | Enhancement |
| Type | Story |
| Last Closed | 2023-09-22 17:32:31 UTC |
| Regression | --- |
| Bug Blocks | 1788991 |
| Attachments | 1710546: Illustration of PDP1GB results; 1710547: Illustration of no-PDP1GB results |
Description (Laszlo Ersek, 2020-08-04 21:32:59 UTC)
Forgot to mention, regarding "use most of the disk as extra swap": the "fallocate" example in the mkswap(8) manual does not work on xfs. When "swapon" is invoked subsequently, the kernel complains (in dmesg) that the file has "holes". The point of "fallocate" is *exactly* to prevent holes in the file, without having to write every single byte of it. So, it doesn't work.

The internet knows this; I was led to multiple CentOS/RHEL-themed blog posts that recommended "dd" instead. So then I spent the next few tens of minutes waiting on "dd" to write a 700 GB swap file, on machine [2], so I'd have enough RAM+swap combined for 1.5 TB of guest RAM. As I said, an exercise in frustration and tedium.
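For reference, the dd-based procedure those posts recommend looks roughly like this. This is only a sketch: the /swapfile path is illustrative, and the 700 GB size is the one from the comment above. The point is that the blocks really have to be written out, since fallocate-created files on xfs contain unwritten extents that the kernel's swap activation rejects as holes.

```sh
# Write the file out in full instead of fallocate-ing it, so that it has
# no holes/unwritten extents; then format and enable it as swap.
dd if=/dev/zero of=/swapfile bs=1M count=716800 status=progress   # ~700 GB
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```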
Created attachment 1710546 [details]
Illustration of PDP1GB results.
Created attachment 1710547 [details]
Illustration of no-PDP1GB results.
I used gnuplot to attempt to provide some illustration of the data, to aid in coming up with a viable rule. Bear in mind that the grid lines and heights are interpolated, since gnuplot draws grid points at fixed intervals while our data is at power-of-2 intervals. I scaled RAM to GB instead of MB.

The data is kind of noisy, so I'm wondering how stable the obtained results are. If we assume they are stable, though, then considering the PDP1GB and non-PDP1GB results together, the data suggests a possibly viable rule:

- Use a 32 MB TSEG if more than 350 CPUs are present, or if more than 768 GB of RAM is present.
- Otherwise, the default 16 MB TSEG is sufficient.

Thanks Dan for the plot! I had thought of it, but really only as "it would be nice if someone plotted this". I consider the results stable. And indeed the plots convey a cryptic message :/

Your rule sounds good to me, but it should be tested in the environment where bug 1788991 is also going to be tested. (The existence of bug 1788991 suggests such an environment *exists*.) I don't know if we'll need another bump at higher boundaries. Thank you!

Amnon - passing this your way, since as I read it, the "global mch.extended-tseg-mbytes" for q35 needs some sort of adjustment.
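For context, a surface plot along the lines described above could be produced like this. This is only a sketch under assumptions the bug does not state: the data file name "pdp1gb.dat", its column layout (vCPUs, RAM in GB, smallest working TSEG in MB), and the dgrid3d density are all illustrative.

```sh
# Interpolate a regular grid from the power-of-2 samples; heights between
# measured points are therefore estimates, which is the caveat noted above.
gnuplot <<'EOF'
set terminal png
set output "pdp1gb.png"
set dgrid3d 30,30
set xlabel "vCPUs"
set ylabel "RAM (GB)"
set zlabel "smallest working TSEG (MB)"
splot "pdp1gb.dat" using 1:2:3 with lines title "PDP1GB"
EOF
```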
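Until QEMU derives the TSEG size automatically, the rule above can be applied from the management side through the property named in the previous comment. A minimal sketch, assuming a POSIX shell; the tseg_mbytes helper name and the example guest size (384 vCPUs, 1 TiB of RAM) are illustrative:

```sh
# Suggested heuristic: 32 MB TSEG beyond 350 vCPUs or 768 GB of RAM,
# otherwise keep the 16 MB default.
tseg_mbytes() {
    local vcpus=$1 ram_gb=$2
    if [ "$vcpus" -gt 350 ] || [ "$ram_gb" -gt 768 ]; then
        echo 32
    else
        echo 16
    fi
}

# Apply the result through the existing MCH property on q35:
qemu-system-x86_64 -machine q35,smm=on \
    -smp 384 -m 1024G \
    -global mch.extended-tseg-mbytes="$(tseg_mbytes 384 1024)"
    # ...plus the rest of the guest configuration
```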
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Re-opening as we still need to fix the issue reported in this BZ.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Setting ITR=9.2.0 - we need to come to a resolution/decision on this one way or another: fix it, or close it won't/can't fix.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and will begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.