Bug 802251
Summary: | kvm-perf backport: Lookup iobus devices by bsearch and resize kvm_io_range array dynamically | |
---|---|---|---
Product: | Red Hat Enterprise Linux 6 | Reporter: | Amos Kong <akong>
Component: | kernel | Assignee: | Amos Kong <akong>
Status: | CLOSED WONTFIX | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | medium | Priority: | medium
Version: | 6.3 | CC: | ailan, juzhang, knoel, michen, mst, mtosatti, rhod, wquan
Target Milestone: | rc | Target Release: | ---
Hardware: | x86_64 | OS: | Linux
Doc Type: | Bug Fix | Last Closed: | 2012-07-26 10:59:25 UTC
Description
Amos Kong
2012-03-12 08:23:58 UTC
Test method:
1. Create a guest.
2. Create 200 ioeventfds in that guest.
3. Write to a memory address in the guest which isn't taken by the ioeventfds (i.e., the worst-case scenario for the lookup).
4. Measure VM exits per second. Each test runs for 60 seconds.

Python script that writes to an address not taken by the ioeventfds:
-----------------
import random, os

while True:
    #r = random.SystemRandom()
    #port = r.randint(0, 256)
    port = 13
    outw_cmd = ("echo -e '\\%s' | dd of=/dev/port seek=%d bs=1 count=2"
                % (oct(0), port))
    print outw_cmd
    os.system(outw_cmd)
-----------------

Created attachment 572696 [details]
KVM-tools hack patch
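For background on the change itself: per the summary, the upstream patches make kvm_io_bus keep its kvm_io_range array sorted (resizing it dynamically) and look devices up with a binary search instead of a linear scan. Below is a minimal sketch of that lookup idea in Python, using the standard bisect module; the class and method names are invented for illustration and are not the kernel's:
--------------------
import bisect

class IOBus(object):
    def __init__(self):
        self.starts = []   # sorted range start addresses, for bisect
        self.ranges = []   # parallel list of (start, length, device)

    def register(self, start, length, dev):
        # Insert at the sorted position; a Python list grows on demand,
        # much like the dynamically resized kvm_io_range array.
        i = bisect.bisect(self.starts, start)
        self.starts.insert(i, start)
        self.ranges.insert(i, (start, length, dev))

    def lookup(self, addr):
        # Binary search: last registered range starting at or before addr.
        i = bisect.bisect_right(self.starts, addr) - 1
        if i >= 0:
            start, length, dev = self.ranges[i]
            if start <= addr < start + length:
                return dev
        return None   # miss: the worst case the test above exercises
--------------------
A miss costs O(log n) comparisons instead of O(n), which is why any effect should be most visible with many ioeventfds registered and writes landing outside all of them.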
The patch in comment #4 is used to hack kvm-tools so that it allocates multiple ioeventfds on startup, since creating actual devices might skew the test.

Hi miya,

Please help to test whether the performance improvement exists with the patches in comment #0; they are host kernel patches.

Test steps:
1. Clone the kvm-tools code, apply the patches in comment #4, and compile.
2. Start a guest with the changed kvm-tools.
3. Use the python script [1] to write to a memory address in the guest which isn't taken by the ioeventfds (the worst-case scenario for the lookup).
4. Measure VM exits per second (a sampling sketch follows this comment). Each test runs for 600 seconds; repeat the test 3 times.

script [1]
--------------------
import random, os

while True:
    r = random.SystemRandom()
    port = r.randint(0, 256)
    outw_cmd = ("echo -e '\\%s' | dd of=/dev/port seek=%d bs=1 count=2"
                % (oct(0), port))
    print outw_cmd
    os.system(outw_cmd)
--------------------
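Step 4 above can be done by periodically sampling KVM's cumulative event counters. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and exposes the standard 'exits' and 'io_exits' stat files there (this is an illustration, not necessarily the exact script used for the results below):
--------------------
import time

def read_kvm_stat(name):
    # cumulative count since boot, from KVM's debugfs statistics
    with open('/sys/kernel/debug/kvm/' + name) as f:
        return int(f.read())

interval = 100  # seconds between samples
while True:
    exits0, io0 = read_kvm_stat('exits'), read_kvm_stat('io_exits')
    time.sleep(interval)
    exits1, io1 = read_kvm_stat('exits'), read_kvm_stat('io_exits')
    print 'exits/s: %d  io_exits/s: %d' % (
        (exits1 - exits0) / interval, (io1 - io0) / interval)
--------------------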
=== Test results from juzhang ===

Conclusion summary: io_exits improved by about 3.89%, exits improved by about 1.61%.

2.6.32-286.x86_64
####################starting###################
exits 17519340 80647
io_exits 3752898 17628
***********1 times*******
io_exits 5520938 17720
exits 25685528 81403
***********2 times*******
io_exits 7300161 17302
exits 33878570 79823
***********3 times*******
io_exits 9069556 17263
exits 42049359 80426
***********4 times*******
exits 50234487 80278
io_exits 10845042 17400
***********5 times*******
io_exits 12613537 17103
exits 58401220 79647

2.6.32-286.el6802251.x86_64
####################starting###################
exits 34321652 82416
io_exits 7554828 18258
***********1 times*******
exits 42621517 83866
io_exits 9392517 18754
***********2 times*******
io_exits 11235984 17860
exits 50939770 81866
***********3 times*******
exits 59252510 82420
io_exits 13077731 18297
***********4 times*******
exits 67561730 81607
io_exits 14918771 18076
***********5 times*******
io_exits 16760017 18278
exits 75862582 82110

So I would backport it to internal.

(In reply to comment #8)
> === Test results from juzhang ===
>
> Conclusion summary: io_exits improved by about 3.89%, exits improved by
> about 1.61%.
>
> 2.6.32-286.x86_64
> [...]

What are the numbers? The left-hand numbers show a big increase, but I do not know what they are.

(In reply to comment #9)
> What are the numbers? The left-hand numbers show a big increase, but I do
> not know what they are.

> > ####################starting###################
> > exits 17519340 80647
> > io_exits 3752898 17628
initial 'exits' and 'io_exits' counts (please ignore the last column)
> > ***********1 times*******
> > io_exits 5520938 17720
> > exits 25685528 81403
counts after 100 seconds
> > ***********2 times*******
> > io_exits 7300161 17302
> > exits 33878570 79823
counts after 200 seconds
> > ***********3 times*******
> > io_exits 9069556 17263
> > exits 42049359 80426
counts after 300 seconds
> > ***********4 times*******
> > exits 50234487 80278
> > io_exits 10845042 17400
counts after 400 seconds
> > ***********5 times*******
> > io_exits 12613537 17103
> > exits 58401220 79647
counts after 500 seconds

Compute the rise over those 500 seconds:
io_exits: X1 = 12613537 - 3752898 = 8860639
exits:    Y1 = 58401220 - 17519340 = 40881880

> > 2.6.32-286.el6802251.x86_64
> > ####################starting###################
> > exits 34321652 82416
> > io_exits 7554828 18258
> > ***********1 times*******
> > exits 42621517 83866
> > io_exits 9392517 18754
> > ***********2 times*******
> > io_exits 11235984 17860
> > exits 50939770 81866
> > ***********3 times*******
> > exits 59252510 82420
> > io_exits 13077731 18297
> > ***********4 times*******
> > exits 67561730 81607
> > io_exits 14918771 18076
> > ***********5 times*******
> > io_exits 16760017 18278
> > exits 75862582 82110

Compute the rise over those 500 seconds:
io_exits: X2 = 16760017 - 7554828 = 9205189
exits:    Y2 = 75862582 - 34321652 = 41540930

(X2 - X1) / X1 = 0.038885
(Y2 - Y1) / Y1 = 0.016121

It means the search speed of the io_bus device lookup was improved, so more io_exits and exits can occur in the same time.

Hi Avi, do you think it's necessary to backport those patches to internal?

IMO, no. The improvement is only with a large number of ioeventfds, yes? That's not a common configuration.

(In reply to comment #12)
> IMO, no. The improvement is only with a large number of ioeventfds, yes?

Yes, the improvement is only obvious with a larger number of ioeventfds.

> That's not a common configuration.

So close this bug as WONTFIX.
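For reference, the percentage figures quoted in comment #8 can be reproduced from the raw counters with a few lines of Python (first/last counter values copied from the tables above):
--------------------
# (first, last) counter values over the 500-second window, per kernel
base = {'io_exits': (3752898, 12613537), 'exits': (17519340, 58401220)}
patched = {'io_exits': (7554828, 16760017), 'exits': (34321652, 75862582)}

for name in ('io_exits', 'exits'):
    rise_base = base[name][1] - base[name][0]           # 2.6.32-286
    rise_patched = patched[name][1] - patched[name][0]  # 2.6.32-286.el6802251
    print '%s improved by %.2f%%' % (
        name, 100.0 * (rise_patched - rise_base) / rise_base)
# prints: io_exits improved by 3.89% / exits improved by 1.61%
--------------------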