Bug 139887 - lvremove of more than one invalid lv causes a hang
Status: CLOSED CURRENTRELEASE
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: lvm2-cluster
Version: 4
Platform: i686 Linux
Priority: medium   Severity: medium
Assigned To: Alasdair Kergon
QA Contact: Cluster QE
Reported: 2004-11-18 12:07 EST by Corey Marthaler
Modified: 2010-01-11 23:02 EST
Doc Type: Bug Fix
Last Closed: 2005-01-10 18:47:55 EST
Description Corey Marthaler 2004-11-18 12:07:21 EST
Description of problem:
[root@morph-02 root]# pvremove foo bar
  foo: Couldn't find device.  Check your filters?
  bar: Couldn't find device.  Check your filters?

[root@morph-02 root]# vgremove foo bar
  Volume group "foo" doesn't exist
  Volume group "bar" doesn't exist

[root@morph-02 root]# lvremove foo bar
  Volume group "foo" not found
  cluster send request failed: Invalid argument

[HANG]

strace:
.
.
.
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
stat64("/proc/lvm/VGs/bar", 0xbfffdf00) = -1 ENOENT (No such file or directory)
rt_sigprocmask(SIG_SETMASK, ~[RTMIN], [], 8) = 0
write(3, "3\1\0\0\0\0\0\0\0\0\0\0\10\0\0\0\0\4\0V_bar\0\377", 26) = 26
read(3, "o\1\0\0\0\350g\320\0\352\377\377\377\1\0\0\0\0", 18) = 18
read(3, "\0o", 511)                     = 2
read(3, "\1\0\0\0\260|\10\10\0\0\0\0\17\0\0\0\0morph-02\0\0\0\0"...,
509) = 33
read(3,
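
The trace ends with the tool apparently blocked in a final read() on fd 3 (which looks like the clvmd socket, given the V_bar request and the morph-02 node name in the replies), waiting for an answer that never comes. For illustration only, a minimal generic sketch of the defensive pattern, with hypothetical names and not the LVM2 source: poll() the socket with a timeout so a missing reply surfaces as an error instead of a hang.

#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: read a reply from 'fd', but give up after
 * 'timeout_ms' milliseconds instead of blocking in read() forever. */
static ssize_t read_reply_with_timeout(int fd, void *buf, size_t len,
                                       int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int ready = poll(&pfd, 1, timeout_ms);

    if (ready < 0)
        return -1;              /* poll() itself failed, errno is set */
    if (ready == 0) {
        errno = ETIMEDOUT;      /* no reply arrived in time */
        return -1;
    }
    return read(fd, buf, len);
}

int main(void)
{
    int pipefd[2];
    char buf[64];

    /* Simulate a daemon that never answers: nothing is ever written
     * to the pipe, so a plain read() would block indefinitely. */
    if (pipe(pipefd) < 0)
        return 1;

    if (read_reply_with_timeout(pipefd[0], buf, sizeof(buf), 1000) < 0)
        fprintf(stderr, "no reply: %s\n", strerror(errno));

    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}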

How reproducible:
Always
Comment 1 Alasdair Kergon 2005-01-04 10:39:12 EST
Which version of LVM2 was this tested with? 
('lvm version')

Please retest with 2.00.31 or later to see if it has since been fixed.
Comment 2 Alasdair Kergon 2005-01-04 10:40:50 EST
If it still fails, please send me the output of 'lvremove -vvvv foo bar'
Comment 3 Christine Caulfield 2005-01-07 08:06:29 EST
It's clvmd getting confused.

What's confusing it is that LVM is trying to unlock the same VG twice:
      Locking V_one at 0x6
  Volume group "one" not found
      Locking V_one at 0x6
  cluster send request failed: Invalid argument

But it should be able to cope with it, of course...
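
For illustration, a rough sketch of that "cope with it" behaviour, using a hypothetical lock table rather than clvmd's real data structures: an unlock request for a lock that was never taken gets an explicit error status back, so the client is always answered rather than left hanging.

#include <errno.h>
#include <stdio.h>
#include <string.h>

#define MAX_LOCKS 16

struct held_lock {
    char name[64];
    int  in_use;
};

static struct held_lock lock_table[MAX_LOCKS];

/* Hypothetical lock handler: record the name so it can be released. */
static int do_lock(const char *name)
{
    for (int i = 0; i < MAX_LOCKS; i++) {
        if (!lock_table[i].in_use) {
            snprintf(lock_table[i].name, sizeof(lock_table[i].name), "%s", name);
            lock_table[i].in_use = 1;
            return 0;
        }
    }
    return -ENOMEM;                 /* table full */
}

/* Hypothetical unlock handler: always produce a reply status, even for
 * a lock that was never taken, so the caller is not left waiting. */
static int do_unlock(const char *name)
{
    for (int i = 0; i < MAX_LOCKS; i++) {
        if (lock_table[i].in_use && strcmp(lock_table[i].name, name) == 0) {
            lock_table[i].in_use = 0;
            return 0;               /* lock released */
        }
    }
    return -ENOENT;                 /* not held: report it, don't wedge */
}

int main(void)
{
    do_lock("V_one");
    printf("first unlock of V_one:  %d\n", do_unlock("V_one"));   /* 0 */
    printf("second unlock of V_one: %d\n", do_unlock("V_one"));   /* -ENOENT */
    return 0;
}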
Comment 4 Christine Caulfield 2005-01-07 09:24:20 EST
This should fix the hang. I'll leave it for Alasdair to fix the
double-unlock (which I also see with lvdisplay, if that helps).

Checking in lib/locking/cluster_locking.c;
/cvs/lvm2/LVM2/lib/locking/cluster_locking.c,v  <--  cluster_locking.c
new revision: 1.2; previous revision: 1.1
done
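
The checkin above doesn't include the diff, so the following is only a sketch of one plausible shape for such a client-side fix, with hypothetical stand-in functions rather than the real cluster_locking.c code: if sending the request to clvmd fails, return the error to the caller instead of going on to wait for replies that will never arrive.

#include <stdio.h>

/* Hypothetical stand-in for sending a lock request to the daemon;
 * here it always simulates the failure seen in the log above. */
static int send_request(const char *lock_name)
{
    fprintf(stderr, "%s: cluster send request failed: Invalid argument\n",
            lock_name);
    return -1;
}

/* Stand-in for collecting per-node replies (the step that hung). */
static int wait_for_replies(void)
{
    return 0;
}

static int cluster_lock(const char *lock_name)
{
    if (send_request(lock_name) < 0)
        return -1;                  /* bail out: nothing to wait for */

    return wait_for_replies();
}

int main(void)
{
    if (cluster_lock("V_bar") < 0)
        fprintf(stderr, "lock request for V_bar aborted cleanly\n");
    return 0;
}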
Comment 5 Corey Marthaler 2005-01-10 18:47:55 EST
Fix verified.
