Bug 172594 - Umounting a logical volume on all nodes causes the LV to leave number of opens > 1
Summary: Umounting a logical volume on all nodes causes the LV to leave number of opens > 1
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: lvm2-cluster
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Christine Caulfield
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-11-07 18:22 UTC by Henry Harris
Modified: 2012-05-19 23:41 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-11-27 17:14:45 UTC
Embargoed:



Description Henry Harris 2005-11-07 18:22:29 UTC
Description of problem: Tried to delete a logical volume after unmounting it 
on all nodes.  Was unable to delete the LV because its number of opens was > 1.  
Used lvchange --refresh to correct the problem.
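
For illustration, a minimal sketch of the failing sequence and the workaround 
described above; the VG, LV, and mount point names (vg_test/lv_test, /mnt/gfs0) 
are placeholders, not the actual configuration:

# on every node: unmount the GFS filesystem that sits on the LV
umount /mnt/gfs0

# on one node: attempt to remove the LV; this is the step that fails here
# because the device's open count is still > 1
lvremove /dev/vg_test/lv_test

# workaround that cleared the stale open count in this case
lvchange --refresh vg_test/lv_test
lvremove /dev/vg_test/lv_test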


Version-Release number of selected component (if applicable):


How reproducible:
Yes, but not sure of exact steps

Steps to Reproduce:
1. Create LV and mount on all nodes in cluster
2. Unmount LV on all nodes
3. Try lvremove
  
Actual results: Unable to remove LV


Expected results:
LV removed

Additional info:

Comment 1 Corey Marthaler 2005-11-07 18:31:04 UTC
Is it possible that there was also something else going on with that lv? I assume
the volume was still active? This has always worked for QA.

On all nodes (except for the remove):
[root@link-01 ~]# mount -t gfs /dev/linear_1/linear_10 /mnt/gfs0
[root@link-01 ~]# umount /dev/linear_1/linear_10
[root@link-01 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/linear_1/linear_10
  VG Name                linear_1
  LV UUID                zA0ewK-w452-Q27o-SwjC-4GnP-SLv4-EtEWPw
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                949.61 GB
  Current LE             243100
  Segments               6
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

[root@link-01 ~]# lvremove /dev/linear_1/linear_10
Do you really want to remove active logical volume "linear_10"? [y/n]: y
  Logical volume "linear_10" successfully removed

Comment 2 Henry Harris 2005-11-07 19:08:01 UTC
The LV was unmounted on all nodes, and no one was accessing the /dev device 
directly, so I don't know what else could have been going on with the LV.  It 
was interesting that the problem was resolved by running lvchange --refresh.  
We will try to get additional info.  BTW, this was seen on the U2 GFS with 
kernel 2.6.9-22 and lvm2-cluster-2.01.14-1.0.

Comment 5 Christine Caulfield 2005-11-21 08:26:44 UTC
Yes, -vvvvvvvvv on the lvremove command, and also on the lvchange --refresh.

dmsetup info <vg>-<lv>

might also help
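
For reference, the requested diagnostics would look roughly like this, reusing 
the linear_1/linear_10 names from comment #1 as a stand-in for the affected LV 
(verbose LVM output goes to stderr; the log file names are just examples):

# verbose trace of the failing remove and of the refresh workaround
lvremove -vvvvvvvvv /dev/linear_1/linear_10 2> lvremove.log
lvchange --refresh -vvvvvvvvv /dev/linear_1/linear_10 2> lvchange.log

# device-mapper view of the LV, including its open count
dmsetup info linear_1-linear_10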

Comment 7 Christine Caulfield 2006-05-05 07:30:13 UTC
Have there been any further incidents of this (ideally with debug info) ?

Comment 8 Henry Harris 2006-05-05 15:06:07 UTC
No, we are on U3 now and have not seen any recurrences of this problem.

Comment 9 Christine Caulfield 2006-05-05 15:18:21 UTC
Thanks for that.

I'll close this bug then. If it does recur then feel free to reopen it.

Comment 10 Henry Harris 2006-07-24 19:30:07 UTC
This bug has reappeared in lvm2-2.02.01-1.3.RHEL4 and 
lvm2-cluster-2.02.01-1.2.RHEL4.  It has only happened once in this version.  
The problem was corrected using the fuser -km command.  Using lvchange --refresh 
did not work.  This is a very rare occurrence.  It seems that there is a small 
window of time in which the umount is successful but an application has opened a file.
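
For the record, a rough sketch of that workaround, reusing the mount point and 
LV names from comment #1; the exact argument given to fuser was not recorded:

# kill whatever still holds files open under the mount point
fuser -km /mnt/gfs0

# the remove then goes through
lvremove /dev/linear_1/linear_10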

Comment 11 Kiersten (Kerri) Anderson 2006-07-26 18:42:21 UTC
Did you run the commands with the verbose flags as requested in comment #5?

Comment 12 Kiersten (Kerri) Anderson 2006-09-22 18:57:47 UTC
Any further occurrences and any more information?

Comment 13 Henry Harris 2006-09-22 19:21:00 UTC
We just saw a recurrence of this yesterday, but customer support corrected 
the problem before I had a chance to look at it.  I will notify them to run 
the commands with the verbose flags the next time this happens.

Comment 14 Alasdair Kergon 2006-10-18 18:23:23 UTC
Please also report which kernel you're using each time.  (We've seen similar
recently with some lvm2/udev/kernel interaction but I don't know if this would
be the same problem or not.)
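
Something like the following, run at the time of the failure, would capture the 
versions in question; the package names are the ones already mentioned in this report:

# kernel and LVM package versions in use when the problem occurs
uname -r
rpm -q lvm2 lvm2-cluster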

Comment 15 Henry Harris 2006-11-27 17:06:04 UTC
No further occurrences of this problem since 9/22/06 (see comment #13).  That 
occurrence was with the 2.6.9-34 kernel and 
lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm, which we are still running today.  
No additional info available at this time.

Comment 16 Corey Marthaler 2006-11-27 17:14:45 UTC
Will close this out for now and reopen if seen again.

