Bug 843073
| Summary: | Add the io_timeout to the lockspace api, and make it safe to adjust. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Leonid Natapov <lnatapov> |
| Component: | sanlock | Assignee: | David Teigland <teigland> |
| Status: | CLOSED ERRATA | QA Contact: | Leonid Natapov <lnatapov> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.3 | CC: | cluster-maint, fsimonce, hateya, lhh |
| Target Milestone: | rc | Keywords: | FutureFeature, Rebase |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | storage | | |
| Fixed In Version: | sanlock-2.6-1.el6 | Doc Type: | Rebase: Bug Fixes and Enhancements |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-02-21 08:53:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 800588, 906027 | | |
Description
Leonid Natapov
2012-07-25 13:55:22 UTC

I think RHEV will want to make use of adjustable i/o timeouts with sanlock, since i/o timeouts are currently adjustable when not using sanlock.

David, what do we have to test here? Please give us instructions for QA.

Leonid, nothing uses this change yet, although I expect that vdsm will use it. In the next 6.4 build I'll include Federico's python patch, and we'll add a comment here containing the steps to use it.

(This is the test, but a build with command line support for running the test is not yet ready.)

Here are steps to verify that sanlock correctly accounts for different io timeouts. The client add_lockspace command should take 2x the io_timeout shown by the previous read_leader (plus a variable 1-2 seconds). add_lockspace is no longer based on the timeout used by the current command (the -o value, or 10 if no -o is specified).

```
service wdmd start
service sanlock start

dd if=/dev/zero of=/root/testfile count=1024 bs=1024
chmod 666 /root/testfile
sanlock direct init -s test:0:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10

# next add_lockspace should take 20 sec
time sanlock client add_lockspace -s test:1:/root/testfile:0
real    0m21.015s
sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10

# next add_lockspace should take 20 sec
time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real    0m21.020s
sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 20

# next add_lockspace should take 40 sec
time sanlock client add_lockspace -o 10 -s test:1:/root/testfile:0
real    0m41.009s
sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10

# next add_lockspace should take 20 sec
time sanlock client add_lockspace -o 5 -s test:1:/root/testfile:0
real    0m21.005s
sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 5

# next add_lockspace should take 10 sec
time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real    0m11.005s
```

This can now be tested with build sanlock-2.6-1.

Verified on sanlock-2.6-2.el6.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0530.html
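For repeated QA runs, the timing check above can be scripted. The following is a minimal sketch, not from the bug report: it assumes bash, reuses the lockspace name and test file from the steps above, and the SLACK tolerance is an assumption standing in for the "plus variable 1-2 seconds" noted in the instructions.

```sh
#!/bin/bash
# Hypothetical helper automating the manual check above: before each join,
# read the io_timeout stored in the on-disk leader record, then confirm
# add_lockspace takes roughly 2x that value, regardless of the -o passed now.
set -e

LS=test            # lockspace name from the steps above
F=/root/testfile   # test file from the steps above
SLACK=3            # assumed tolerance in seconds

check_join() {
    local opts=$1
    local prev expected start took
    # io_timeout currently recorded in the leader record
    prev=$(sanlock direct read_leader -s $LS:1:$F:0 | awk '/io_timeout/ {print $2}')
    expected=$((2 * prev))

    start=$SECONDS
    sanlock client add_lockspace $opts -s $LS:1:$F:0
    took=$((SECONDS - start))
    sanlock client rem_lockspace -s $LS:1:$F:0

    # pass if |took - expected| <= SLACK
    if [ $((took - expected)) -le $SLACK ] && [ $((expected - took)) -le $SLACK ]; then
        echo "OK: join with '${opts:-default}' took ${took}s (expected ~${expected}s)"
    else
        echo "FAIL: join with '${opts:-default}' took ${took}s (expected ~${expected}s)"
    fi
}

# same -o sequence as the manual transcript
check_join ""
check_join "-o 20"
check_join "-o 10"
check_join "-o 5"
check_join "-o 20"
```

Each join should track the io_timeout written by the previous join (or the initial default of 10), matching the transcript above: for example, a join with -o 10 immediately after a join with -o 20 still takes about 40 seconds.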
I think RHEV will want to make use of adjustable i/o timeouts with sanlock, since i/o timeouts are currently adjustable when not using sanlock. David,what we have to test here ? Give us instructions for QA,please. Leonid, nothing uses this change yet, although I expect that vdsm will use it. In the next 6.4 build I'll include Federico's python patch and we'll add a comment here containing the steps to use it. (This it the test, but a build with command line support for running the test is not yet ready.) Here are steps to verify that sanlock is correctly accounting for different io timeouts. The client add_lockspace command should take 2x the prev read_leader io_timeout (plus variable 1-2 seconds). The add_lockspace is no longer based on the timeout used by the current command (the -o value or 10 if no -o is specified.) service wdmd start service sanlock start dd if=/dev/null of=/root/testfile count=1024 bs=1024 chmod 666 /root/testfile sanlock direct init -s test:0:/root/testfile:0 sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout io_timeout 10 # next add_lockspace should take 20 sec time sanlock client add_lockspace -s test:1:/root/testfile:0 real 0m21.015s sanlock client rem_lockspace -s test:1:/root/testfile:0 sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout io_timeout 10 # next add_lockspace should take 20 sec time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0 real 0m21.020s sanlock client rem_lockspace -s test:1:/root/testfile:0 sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout io_timeout 20 # next add_lockspace should take 40 sec time sanlock client add_lockspace -o 10 -s test:1:/root/testfile:0 real 0m41.009s sanlock client rem_lockspace -s test:1:/root/testfile:0 sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout io_timeout 10 # next add_lockspace should take 20 sec time sanlock client add_lockspace -o 5 -s test:1:/root/testfile:0 real 0m21.005s sanlock client rem_lockspace -s test:1:/root/testfile:0 sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout io_timeout 5 # next add_lockspace should take 10 sec time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0 real 0m11.005s This can now be tested with build sanlock-2.6-1 sanlock-2.6-2.el6.x86_64 Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-0530.html |