Bug 843073

Summary: Add the io_timeout to the lockspace api, and make it safe to adjust.
Product: Red Hat Enterprise Linux 6
Reporter: Leonid Natapov <lnatapov>
Component: sanlock
Assignee: David Teigland <teigland>
Status: CLOSED ERRATA
QA Contact: Leonid Natapov <lnatapov>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.3
CC: cluster-maint, fsimonce, hateya, lhh
Target Milestone: rc
Keywords: FutureFeature, Rebase
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version: sanlock-2.6-1.el6
Doc Type: Rebase: Bug Fixes and Enhancements
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 08:53:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 800588, 906027

Description Leonid Natapov 2012-07-25 13:55:22 UTC
The sanlock daemon -o option can be used to set the io timeout, but it's essential that all hosts use the same value, and nothing currently verifies or enforces this consistency.  We need to add the io_timeout to the lockspace api, and make it safe to adjust.
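
For illustration only, a minimal C sketch of what the requested addition could look like from a caller's side, assuming it is exposed as a sanlock_add_lockspace_timeout() call next to the existing sanlock_add_lockspace() in the admin API; the header and struct field names below are from my reading of the public sanlock headers and should be checked against the actual sanlock.h/sanlock_admin.h:

/* sketch: join a lockspace with an explicit per-lockspace io timeout
 * (hypothetical caller; assumes sanlock_add_lockspace_timeout() and the
 * usual sanlk_lockspace/sanlk_disk fields) */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "sanlock.h"
#include "sanlock_admin.h"

int join_lockspace(uint32_t io_timeout)
{
	struct sanlk_lockspace ls;
	int rv;

	memset(&ls, 0, sizeof(ls));
	strncpy(ls.name, "test", SANLK_NAME_LEN);
	ls.host_id = 1;
	strncpy(ls.host_id_disk.path, "/root/testfile", SANLK_PATH_LEN);
	ls.host_id_disk.offset = 0;

	/* the timeout travels with the lockspace instead of relying on every
	 * host's daemon having been started with the same -o value */
	rv = sanlock_add_lockspace_timeout(&ls, 0, io_timeout);
	if (rv < 0)
		fprintf(stderr, "add_lockspace failed: %d\n", rv);
	return rv;
}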

Comment 2 David Teigland 2012-09-24 15:19:52 UTC
I think RHEV will want to make use of adjustable i/o timeouts with sanlock, since i/o timeouts are currently adjustable when not using sanlock.

Comment 3 Leonid Natapov 2012-10-04 09:38:04 UTC
David, what do we have to test here? Please give us instructions for QA.

Comment 5 David Teigland 2012-10-04 16:51:16 UTC
Leonid, nothing uses this change yet, although I expect that vdsm will use it.  In the next 6.4 build I'll include Federico's python patch and we'll add a comment here containing the steps to use it.

Comment 6 David Teigland 2012-10-05 15:36:41 UTC
(This is the test, but a build with command line support for running the test is not yet ready.)

Here are the steps to verify that sanlock correctly accounts for different io timeouts.  The client add_lockspace command should take 2x the io_timeout reported by the preceding read_leader (plus a variable 1-2 seconds).  The add_lockspace time is no longer based on the timeout used by the current command (the -o value, or 10 if no -o is specified).

service wdmd start
service sanlock start

dd if=/dev/zero of=/root/testfile count=1024 bs=1024
chmod 666 /root/testfile
sanlock direct init -s test:0:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -s test:1:/root/testfile:0
real 0m21.015s

sanlock client rem_lockspace -s test:1:/root/testfile:0
 
sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real 0m21.020s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 20
# next add_lockspace should take 40 sec

time sanlock client add_lockspace -o 10 -s test:1:/root/testfile:0
real 0m41.009s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -o 5 -s test:1:/root/testfile:0
real 0m21.005s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 5
# next add_lockspace should take 10 sec

time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real 0m11.005s

Comment 7 David Teigland 2012-10-09 16:28:56 UTC
This can now be tested with build sanlock-2.6-1

Comment 8 Leonid Natapov 2013-01-20 15:53:22 UTC
Verified on sanlock-2.6-2.el6.x86_64.

Comment 10 errata-xmlrpc 2013-02-21 08:53:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0530.html