This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 1919863 - dirty-rate is divided by calc-time when calculating guest dirty-rate
Summary: dirty-rate is divided by calc-time when calculating guest dirty-rate
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Li Xiaohui
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-25 10:13 UTC by Li Xiaohui
Modified: 2023-06-30 17:57 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-30 17:57:39 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System: Red Hat Issue Tracker
ID: RHEL-689
Last Updated: 2023-06-30 17:57:38 UTC

Description Li Xiaohui 2021-01-25 10:13:29 UTC
Description of problem:
dirty-rate is divided by calc-time when calculating the guest dirty-rate, so the same workload reports very different rates depending on the calc-time used.


Version-Release number of selected component (if applicable):
host info:
kernel-4.18.0-275.el8.x86_64&qemu-kvm-5.2.0-3.module+el8.4.0+9499+42e58f08.x86_64
guest info:
kernel-4.18.0-275.el8.x86_64


How reproducible:
100%


Steps to Reproduce:
1. Boot a guest on the src host; see the qemu command[1].
2. Run stress in the guest:
# stressapptest -M 200 -s 100000
3. Query the dirty rate via QMP commands (a scripted variant follows these steps):
(1) Scenario 1: set calc-time to 3:
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 3}}
(2) Scenario 2: set calc-time to 1:
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 1}}
After (1) or (2), check the dirty rate:
{"execute":"query-dirty-rate"}


Actual results:
After step 3, the dirty rate differs between Scenario 1 and Scenario 2 on the src host:
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 3}}
{"return": {}}
{"execute":"query-dirty-rate"}
{"return": {"status": "measured", "dirty-rate": 61, "start-time": 218710, "calc-time": 3}}
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 3}}
{"return": {}}
{"execute":"query-dirty-rate"}
{"return": {"status": "measured", "dirty-rate": 58, "start-time": 219020, "calc-time": 3}}
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 1}}
{"return": {}}
{"execute":"query-dirty-rate"}
{"return": {"status": "measured", "dirty-rate": 204, "start-time": 219044, "calc-time": 1}}
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 1}}
{"return": {}}
{"execute":"query-dirty-rate"}
{"return": {"status": "measured", "dirty-rate": 196, "start-time": 219068, "calc-time": 1}}
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 3}}
{"return": {}}
{"execute":"query-dirty-rate"}
{"return": {"status": "measured", "dirty-rate": 60, "start-time": 219082, "calc-time": 3}}
{"execute":"calc-dirty-rate", "arguments": {"calc-time": 1}}
{"return": {}}
{"execute":"query-dirty-rate"}

Note: I have checked the stressapptest program; it works well. The measured rates scale roughly with 1/calc-time (61 * 3 = 183 and 58 * 3 = 174, close to the 204 and 196 measured with calc-time 1), which is consistent with a fixed dirty-page count simply being divided by calc-time.


Expected results:
The dirty-rate shouldn't be divided by calc-time; the same workload should report roughly the same rate regardless of the measurement window.


Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1833235#c14

Comment 1 Dr. David Alan Gilbert 2021-01-25 12:51:33 UTC
This may be a limitation of the mechanism used; it works by doing:

   a) Start dirty tracking
   b) Wait for calc-time
   c) Stop dirty tracking
   d) Count number of pages dirty

That can't tell the difference between a large area of memory that's slowly changed during 'calc-time'
and the same area of memory that's rapidly changed repeatedly.

Using the 1-second calc-time seems to make more sense here; any scaling seems bogus.

An interesting question is whether we could provide the user with the results
for different calc-times; that could then distinguish between the two cases.

Comment 2 John Ferlan 2021-02-08 19:37:17 UTC
Assigned to Amnon for initial triage per the bz process, based on the age of the bug and its having been created or assigned to virt-maint without triage.

Comment 5 Li Xiaohui 2021-07-15 09:11:06 UTC
Hi Dave,
I hit this bz on the latest RHEL 9.0; shall I clone it to RHEL 9.0?

Comment 6 John Ferlan 2021-09-08 21:28:23 UTC
Moving RHEL-AV bugs to RHEL 9. If it is necessary to resolve this in RHEL 8, clone it to the current RHEL 8 release.

Comment 8 RHEL Program Management 2022-07-25 07:28:06 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 9 Li Xiaohui 2022-07-25 08:36:57 UTC
Reopening this bug, as I can reproduce it on the latest RHEL 8.7 and RHEL 9.1.

Comment 12 Li Xiaohui 2023-01-03 08:16:44 UTC
Extending the stale date by 6 months for this bug, as I can reproduce it on the latest RHEL 8.8 and RHEL 9.2.

