Bug 960989 - [sanlock] Provide a command to request the release of a resource currently held by another host/pid
Summary: [sanlock] Provide a command to request the release of a resource currently held by another host/pid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sanlock
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: Leonid Natapov
URL:
Whiteboard:
Depends On:
Blocks: 883504
 
Reported: 2013-05-08 13:17 UTC by Federico Simoncelli
Modified: 2013-11-21 11:48 UTC
CC List: 5 users

Fixed In Version: sanlock-2.8-1
Doc Type: Rebase: Bug Fixes and Enhancements
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-21 11:48:38 UTC




Links:
Red Hat Product Errata RHBA-2013:1632 (normal, SHIPPED_LIVE): sanlock bug fix and enhancement update, last updated 2013-11-21 01:32:25 UTC

Description Federico Simoncelli 2013-05-08 13:17:07 UTC
Description of problem:
Sanlock should provide a way to request the release of a resource currently held by another host.

Additional info:
This feature is currently implemented in the daemon but it's not exposed through the python bindings. A patch has been posted upstream:

https://lists.fedorahosted.org/pipermail/sanlock-devel/2013-May/000367.html

As far as the use case is concerned, this feature probably depends on some earlier patches:

9863624 python: return lease version in read_resource
0ce413a python: add the versioning support to acquire

The API call named "request" (for consistency with the name exposed by the C API) is defined as follows:

request(lockspace, resource, disks [, action=REQ_RELEASE, version=0])

The possible values for action are REQ_RELEASE and REQ_FORCE (the latter escalates to SIGKILL if the graceful release request is not satisfied).
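
A rough sketch of how the proposed binding might be called, following the definition above (the module name "sanlock", module-level REQ_* constants, and the list of (path, offset) tuples for disks are assumptions for illustration, not details confirmed here):

import sanlock

LOCKSPACE = "test"
RESOURCE = "test"
DISKS = [("/dev/test/test", 1048576)]  # (path, offset) pairs

# Graceful request: ask the current holder to release the lease.
sanlock.request(LOCKSPACE, RESOURCE, DISKS, action=sanlock.REQ_RELEASE)

# Forceful request: escalates to SIGKILL on the holder's pid if the
# graceful release is not satisfied.
# sanlock.request(LOCKSPACE, RESOURCE, DISKS, action=sanlock.REQ_FORCE)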

Comment 2 yeylon@redhat.com 2013-07-01 20:14:16 UTC
how do we test this?

Comment 3 Federico Simoncelli 2013-07-01 21:15:43 UTC
(In reply to Yaniv Eylon from comment #2)
> how do we test this?

Using shared storage, two hosts, and sanlock commands (or, even better, python scripts). We'll add the technical details (e.g. sanlock commands or python scripts) ASAP.

Comment 4 David Teigland 2013-07-01 21:48:37 UTC
Here is a sequence of commands to test the capability without the need for a separate test program.  It uses a shared vg "test" with a shared lv "test".


host1,2: service wdmd start
host1,2: service sanlock start

host1:
lvcreate -n test -L1G test
sanlock client init -s test:0:/dev/test/test:0
sanlock client init -r test:test:/dev/test/test:1048576

host1: sanlock client add_lockspace -s test:1:/dev/test/test:0
host2: sanlock client add_lockspace -s test:2:/dev/test/test:0

host1: sanlock client command -r test:test:/dev/test/test:1048576 -c /bin/sleep 1000

(This acquires the lock then runs "sleep" for 1000 seconds.)

host2: sanlock client request -r test:test:/dev/test/test:1048576:2 -f 1

(After this, the "sleep" on host1 should be killed within 30 seconds.)
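
Comment 3 mentioned python scripts as an alternative; a rough python-binding equivalent of the host2 step above might look like the sketch below (the module name, the (path, offset) tuple format for disks, and the constant names follow the definition in the description and are assumptions rather than verified details; version=2 mirrors the trailing ":2" in the command above):

import sanlock

# Roughly equivalent to:
#   sanlock client request -r test:test:/dev/test/test:1048576:2 -f 1
disks = [("/dev/test/test", 1048576)]
sanlock.request("test", "test", disks, action=sanlock.REQ_FORCE, version=2)

# After this, the lease holder on host1 ("sleep 1000") should be killed
# within about 30 seconds.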

Comment 7 Leonid Natapov 2013-08-28 12:14:07 UTC
sanlock-2.8-1.el6.x86_64.
Tested according to the instructions in comment #4.

Comment 9 errata-xmlrpc 2013-11-21 11:48:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1632.html

