Red Hat Bugzilla – Bug 172594
Unmounting a logical volume on all nodes causes the LV to leave number of opens > 1
Last modified: 2012-05-19 19:41:01 EDT
Description of problem: Tried to delete a logical volume after unmounting it
on all nodes. Was unable to delete LV because the number of opens > 1. Used
lvchange --refresh to correct.
Version-Release number of selected component (if applicable):
How reproducible:
Yes, but not sure of exact steps
Steps to Reproduce:
1. Create LV and mount on all nodes in cluster
2. Unmount LV on all nodes
3. Try lvremove
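The steps above can be sketched as follows. The VG/LV names and mount point are taken from the transcript later in this report; the LV size and the per-node iteration are assumptions, and these commands are destructive cluster operations, not something to run as-is.

```shell
# Hypothetical reproduction sketch (names from the transcript below).
# Step 1: create the LV and mount it on every node in the cluster.
lvcreate -L 949G -n linear_10 linear_1
mount -t gfs /dev/linear_1/linear_10 /mnt/gfs0   # repeat on each node

# Step 2: unmount the LV on every node.
umount /dev/linear_1/linear_10                   # repeat on each node

# Step 3: attempt the remove; in the failure case this is refused
# because the open count is still > 1.
lvremove /dev/linear_1/linear_10
```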
Actual results: Unable to remove LV
It is possible that there was also something else going on with that LV? I assume
the volume was still active? This has always worked for QA.
On all nodes (except for the remove):
[root@link-01 ~]# mount -t gfs /dev/linear_1/linear_10 /mnt/gfs0
[root@link-01 ~]# umount /dev/linear_1/linear_10
[root@link-01 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/linear_1/linear_10
VG Name linear_1
LV UUID zA0ewK-w452-Q27o-SwjC-4GnP-SLv4-EtEWPw
LV Write Access read/write
LV Status available
# open 0
LV Size 949.61 GB
Current LE 243100
Read ahead sectors 0
Block device 253:0
[root@link-01 ~]# lvremove /dev/linear_1/linear_10
Do you really want to remove active logical volume "linear_10"? [y/n]: y
Logical volume "linear_10" successfully removed
The LV was unmounted on all nodes, and no one was accessing the /dev device
directly, so I don't know what else could have been going on with the LV. It
was interesting that the problem was resolved by running lvchange --refresh.
We will try to get additional info. BTW, this was seen on the U2 GFS with
kernel 2.6.9-22 and lvm2-cluster-2.01.14-1.0.
Yes, please run with -vvvvvvvvv on the lvremove command, and on the lvchange --refresh as well.
dmsetup info <vg>-<lv>
might also help
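As a small illustration of what the suggested dmsetup info check gives you: the output includes an "Open count:" field, which is the number the failing lvremove complains about. The following helper is purely hypothetical (not part of the lvm2 tooling); it runs dmsetup info for a device-mapper name like "linear_1-linear_10" and extracts that count.

```python
import re
import subprocess


def parse_open_count(info_text):
    """Extract the 'Open count' value from `dmsetup info` output."""
    m = re.search(r"^Open count:\s*(\d+)", info_text, re.MULTILINE)
    if m is None:
        raise ValueError("no 'Open count' line in dmsetup output")
    return int(m.group(1))


def open_count(dm_name):
    """Return the open count for a DM device, e.g. "linear_1-linear_10".

    Hypothetical convenience wrapper; requires root and a running
    device-mapper, so it is only a sketch of the suggested check.
    """
    out = subprocess.run(["dmsetup", "info", dm_name],
                         capture_output=True, text=True, check=True).stdout
    return parse_open_count(out)
```

After a clean umount on all nodes this count should be 0; a nonzero value here is the symptom described in this bug.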
Have there been any further incidents of this (ideally with debug info) ?
No, we are on U3 now and have not seen any recurrences of this problem.
Thanks for that.
I'll close this bug then. If it does recur then feel free to reopen it.
This bug has reappeared in lvm2-2.02.01-1.3.RHEL4 and
lvm2-cluster-2.02.01-1.2.RHEL4. It has only happened once in this version.
The problem was corrected using the fuser -km command; using lvchange
--refresh did not work. This is a very rare occurrence. It seems that there
is a small window of time in which the umount is successful but an
application has opened a file.
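The fuser workaround described above can be sketched as follows: first list any processes still holding the device open, then (as was done here) kill them and retry the remove. The device path is taken from the transcript earlier; run against a live system this kills processes, so it is a sketch of the workaround, not a recommendation.

```shell
# Show any processes still using the device (verbose, mount/device mode).
fuser -vm /dev/linear_1/linear_10

# The workaround used in this report: kill the remaining users...
fuser -km /dev/linear_1/linear_10

# ...after which the remove succeeds.
lvremove /dev/linear_1/linear_10
```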
Did you run the commands with the verbose flags as requested in comment #5?
Any further occurrences, or any more information?
We just saw a recurrence of this yesterday, but customer support corrected
the problem before I had a chance to look at it. I will notify them to run
the commands with the verbose flags the next time this happens.
Please also report which kernel you're using each time. (We've seen similar
recently with some lvm2/udev/kernel interaction but I don't know if this would
be the same problem or not.)
No further occurrences of this problem since 9/22/06 (see comment #13). That
occurrence was with the 2.6.9-34 kernel and
lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm, which we are still running today.
No additional info available at this time.
Will close this out for now and reopen if seen again.