Bug 578413 - vgremove does not take blocking lock and fails if run concurrently with other lvm commands
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Hardware: All
OS: Linux
Priority: high  Severity: high
Target Milestone: rc
Assigned To: Milan Broz
QA Contact: Corey Marthaler
Keywords: ZStream
Blocks: 577624 582232
Reported: 2010-03-31 05:00 EDT by Ayal Baron
Modified: 2014-03-16 21:51 EDT (History)
CC: 11 users

Doc Type: Bug Fix
Last Closed: 2011-01-13 17:40:54 EST

Attachments
strace output of failed vgremove command (14.69 KB, application/x-bzip), 2010-03-31 06:38 EDT, Cyril Plisko
vgremove -vvvv output (3.03 KB, application/x-bzip), 2010-03-31 06:39 EDT, Cyril Plisko

Description Ayal Baron 2010-03-31 05:00:16 EDT
Description of problem:
Running 2 vgremove commands concurrently (on different VGs) causes one of them to fail. The same happens when running vgremove concurrently with pvs, and probably with other lvm commands.
See https://bugzilla.redhat.com/show_bug.cgi?id=516773 for similar issue.

Error is:
/var/lock/lvm/P_orphans: flock failed: Resource temporarily unavailable
Can't get lock for orphan PVs
<rc> = 5
This is blocking vdsm: https://bugzilla.redhat.com/show_bug.cgi?id=577624
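The "Resource temporarily unavailable" message is the libc rendering of EAGAIN from a non-blocking flock(2) attempt on the orphan lock file. A minimal sketch of that failure mode (the path below is a placeholder, not LVM's real /var/lock/lvm/P_orphans):

```python
# Two independent file descriptors contend for an exclusive flock;
# the non-blocking attempt fails with EAGAIN, which strerror() renders
# as "Resource temporarily unavailable". The path is a placeholder.
import errno
import fcntl
import os

path = "/tmp/P_orphans_demo"

holder = open(path, "w")
fcntl.flock(holder, fcntl.LOCK_EX)  # first command holds the orphan lock

contender = open(path, "w")
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    err = None
except OSError as e:
    err = e.errno  # EAGAIN/EWOULDBLOCK

print(os.strerror(err))
holder.close()
contender.close()
```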

How reproducible:
Comment 1 Milan Broz 2010-03-31 05:07:34 EDT
(I expect the wait_for_locks = 1 is set in lvm.conf here - it was intended to be configurable.)

Please can you post the output of the failing command with -vvvv?
(An strace will probably help here too.)

Comment 2 Cyril Plisko 2010-03-31 06:38:32 EDT
Created attachment 403697 [details]
strace output of failed vgremove command
Comment 3 Cyril Plisko 2010-03-31 06:39:01 EDT
Created attachment 403698 [details]
vgremove -vvvv output
Comment 4 Milan Broz 2010-03-31 07:03:04 EDT
Ah, ok - I missed the "on different vgs" part. I can reproduce that easily; it is clearly a bug.
Comment 5 Ayal Baron 2010-03-31 08:07:49 EDT
just for the record, the host is configured with "wait_for_locks = 1".
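For reference, that knob lives in the global section of lvm.conf; a sketch of the relevant setting:

```
global {
    # When set to 1, commands wait for locks held by other
    # processes instead of failing immediately.
    wait_for_locks = 1
}
```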
Comment 6 Milan Broz 2010-03-31 10:08:18 EDT
Patch sent to lvm-devel for review; it is a one-liner, but it touches the locking core...
Comment 7 Milan Broz 2010-03-31 13:33:58 EDT
Patch is now upstream, need some time for testing.
Comment 8 Milan Broz 2010-04-13 12:16:34 EDT
Patch added to lvm2-2.02.56-9.el5
Comment 9 Corey Marthaler 2010-04-13 15:33:15 EDT
What am I doing wrong that I can't reproduce this issue?

I created two different VGs, populated them with LVs, and then attempted to remove both simultaneously. I've tried with both VGs deactivated and with both activated, and by doing the remove from different nodes in the cluster and from the same node in the cluster.

Every time the remove works fine.
Comment 10 Milan Broz 2010-04-13 15:45:43 EDT
It is hard to reproduce because it is a race - I was only able to reproduce it by stopping the first vgremove in a debugger before it unlocked the orphan lock.

Maybe try running one loop that creates/removes VG1, and a second loop doing the same in parallel for VG2 (on different PVs), with local locking.

(explanation from commit log:)

This fixes problem with orphan locking, e.g.
    vgremove VG1    |    vgremove VG2
    lock(VG1)       |    lock(VG2)
    lock(ORPHAN)    |    lock(ORPHAN) -> fail, non-blocking
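The two-column trace above can be simulated with threads and flock on a placeholder lock file: with LOCK_NB the second "vgremove" fails exactly as reported, while a blocking lock (the behaviour after the fix) simply waits its turn. File names and timings here are illustrative only.

```python
# Simulation of the race from the commit log. /tmp/orphan_demo.lock stands
# in for /var/lock/lvm/P_orphans; two threads stand in for the two commands.
import fcntl
import threading
import time

ORPHAN = "/tmp/orphan_demo.lock"
holder_ready = threading.Event()
results = {}

def vgremove_vg1():
    with open(ORPHAN, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # lock(ORPHAN) while removing VG1
        holder_ready.set()
        time.sleep(0.3)                 # still busy with VG1's orphan PVs
    # closing the file releases the lock

def vgremove_vg2():
    holder_ready.wait()                 # ensure VG1 already holds the lock
    with open(ORPHAN, "w") as f:
        try:
            # Old behaviour: non-blocking attempt fails with EWOULDBLOCK.
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            results["nonblocking"] = "ok"
        except OSError:
            results["nonblocking"] = "failed"
        # Fixed behaviour: a blocking lock waits until VG1 is done.
        fcntl.flock(f, fcntl.LOCK_EX)
        results["blocking"] = "ok"

t1 = threading.Thread(target=vgremove_vg1)
t2 = threading.Thread(target=vgremove_vg2)
t1.start()
t2.start()
t1.join()
t2.join()
print(results)
```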
Comment 11 Corey Marthaler 2010-04-13 17:09:27 EDT
The multiple create/delete loops with local locking cause the issue fairly quickly:

  /var/lock/lvm/P_orphans: flock failed: Resource temporarily unavailable
  Can't get lock for orphan PVs
Comment 12 Corey Marthaler 2010-04-13 17:15:20 EDT
Fix verified in lvm2-2.02.56-9.el5.
Comment 15 errata-xmlrpc 2011-01-13 17:40:54 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

