Bug 843073 - Add the io_timeout to the lockspace api, and make it safe to adjust.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sanlock
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: David Teigland
QA Contact: Leonid Natapov
Whiteboard: storage
Keywords: FutureFeature, Rebase
Depends On:
Blocks: 800588 906027
Reported: 2012-07-25 09:55 EDT by Leonid Natapov
Modified: 2013-02-21 03:53 EST
CC List: 4 users

See Also:
Fixed In Version: sanlock-2.6-1.el6
Doc Type: Rebase: Bug Fixes and Enhancements
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 03:53:14 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Leonid Natapov 2012-07-25 09:55:22 EDT
The sanlock daemon's -o option can be used to set the I/O timeout, but it's essential that all hosts use the same value, and nothing verifies or enforces this consistency.  We need to add the io_timeout to the lockspace API, and make it safe to adjust.
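The lockspace arguments used in the test steps below follow sanlock's NAME:HOST_ID:PATH:OFFSET colon-separated format (e.g. test:1:/root/testfile:0). A minimal sketch of building such a spec string; the helper name is illustrative and not part of sanlock's API:

```python
def lockspace_spec(name, host_id, path, offset=0):
    """Build the NAME:HOST_ID:PATH:OFFSET string passed to
    commands like `sanlock client add_lockspace -s ...`."""
    return f"{name}:{host_id}:{path}:{offset}"

print(lockspace_spec("test", 1, "/root/testfile"))  # test:1:/root/testfile:0
```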
Comment 2 David Teigland 2012-09-24 11:19:52 EDT
I think RHEV will want to make use of adjustable i/o timeouts with sanlock, since i/o timeouts are currently adjustable when not using sanlock.
Comment 3 Leonid Natapov 2012-10-04 05:38:04 EDT
David, what do we have to test here? Please give us instructions for QA.
Comment 5 David Teigland 2012-10-04 12:51:16 EDT
Leonid, nothing uses this change yet, although I expect that vdsm will use it.  In the next 6.4 build I'll include Federico's python patch and we'll add a comment here containing the steps to use it.
Comment 6 David Teigland 2012-10-05 11:36:41 EDT
(This is the test, but a build with command-line support for running it is not yet ready.)

Here are steps to verify that sanlock correctly accounts for different I/O timeouts.  The client add_lockspace command should take 2x the io_timeout previously written to disk (as reported by read_leader), plus a variable 1-2 seconds.  The add_lockspace delay is no longer based on the timeout used by the current command (the -o value, or 10 if no -o is specified).
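In other words, the expected add_lockspace duration depends only on the io_timeout already on disk, not on the -o value of the current command (the -o value is written to disk for the next add_lockspace to pick up). A small sketch of that rule; the function name is illustrative:

```python
def expected_add_lockspace_secs(on_disk_io_timeout):
    """add_lockspace waits 2x the io_timeout previously written
    to the leader record; the current command's -o value does not
    affect this wait, it only sets the timeout for the next run."""
    return 2 * on_disk_io_timeout

# Matches the timings in the transcript below (+1-2 s of overhead):
print(expected_add_lockspace_secs(10))  # 20
print(expected_add_lockspace_secs(20))  # 40
print(expected_add_lockspace_secs(5))   # 10
```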

service wdmd start
service sanlock start

dd if=/dev/zero of=/root/testfile count=1024 bs=1024
chmod 666 /root/testfile
sanlock direct init -s test:0:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -s test:1:/root/testfile:0
real 0m21.015s

sanlock client rem_lockspace -s test:1:/root/testfile:0
 
sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real 0m21.020s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 20
# next add_lockspace should take 40 sec

time sanlock client add_lockspace -o 10 -s test:1:/root/testfile:0
real 0m41.009s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 10
# next add_lockspace should take 20 sec

time sanlock client add_lockspace -o 5 -s test:1:/root/testfile:0
real 0m21.005s

sanlock client rem_lockspace -s test:1:/root/testfile:0

sanlock direct read_leader -s test:1:/root/testfile:0 | grep io_timeout
io_timeout 5
# next add_lockspace should take 10 sec

time sanlock client add_lockspace -o 20 -s test:1:/root/testfile:0
real 0m11.005s
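The checks above grep the io_timeout field out of the read_leader output. A sketch of parsing that value programmatically, assuming the "io_timeout N" line format shown in the transcript (other field names in read_leader output are not shown here):

```python
def parse_io_timeout(read_leader_output):
    """Extract the io_timeout value from `sanlock direct read_leader`
    output, which contains a line like 'io_timeout 10'.
    Returns None if the field is absent."""
    for line in read_leader_output.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "io_timeout":
            return int(parts[1])
    return None

print(parse_io_timeout("io_timeout 10"))  # 10
```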
Comment 7 David Teigland 2012-10-09 12:28:56 EDT
This can now be tested with build sanlock-2.6-1
Comment 8 Leonid Natapov 2013-01-20 10:53:22 EST
sanlock-2.6-2.el6.x86_64
Comment 10 errata-xmlrpc 2013-02-21 03:53:14 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0530.html
