Bug 721122 - lvremove cannot remove "open" snapshot
Summary: lvremove cannot remove "open" snapshot
Keywords:
Status: CLOSED DUPLICATE of bug 570359
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-13 19:17 UTC by Brian Wheeler
Modified: 2011-08-05 17:24 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-05 17:21:30 UTC
Target Upstream Version:
Embargoed:


Attachments
an online fsck check (3.27 KB, application/octet-stream)
2011-07-13 19:17 UTC, Brian Wheeler

Description Brian Wheeler 2011-07-13 19:17:13 UTC
Created attachment 512728 [details]
an online fsck check

Description of problem:

I've written an online fsck script which creates snapshot volumes, but I cannot reliably remove the snapshot volume afterwards: it shows up as open even though nothing should be using it. There are monitoring messages in dmesg, but disabling monitoring makes no difference.

Version-Release number of selected component (if applicable):
lvm2-2.02.83-3.el6.x86_64

How reproducible:
It doesn't happen on every lvremove, but often enough that I cannot put the script into cron.


Steps to Reproduce:
1. run "online_fsck -a -v" script
2. observe process -- when destroying snapshot volume it may fail
3. even manually trying to remove it will fail, although after time it does allow removal.  various combinations of vgscan, lvscan, starting or stopping lvm2-monitor and sync hasn't improved or degraded the situation.
  
Actual results:

The logical volume is 'stuck' in the open state, so lvremove fails.

Expected results:

The snapshot volume is removed successfully.

Additional info:

I've attached my script that triggers it, but I suspect it could be triggered by just entering the commands by hand.  This is the script I'm using on my servers, but I don't make any promises that it won't mess up your machine or run over your pet goldfish.

The core commands it is using are:

lvcreate -s -l 5%ORIGIN -n $lv-snap $vg/$lv # create lv snapshot
(nice e2fsck -p $FSCK_OPTS /dev/$filesystem-snap) && e2fsck -fy -Fttv -C0 /dev/$filesystem-snap  # the filesystem check
tune2fs -C0 -T now /dev/$filesystem   # if it has an ok fsck
lvremove -f $vg/$lv-snap # this is what fails
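
For illustration (not part of the attached script), one way to confirm that the snapshot is still counted as open before retrying the removal:

lvdisplay $vg/$lv-snap | grep '# open'   # a non-zero open count is what makes lvremove refuse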

Comment 2 Peter Rajnoha 2011-08-03 10:35:11 UTC
Do you have the udisks package installed? If so, then this is another instance of bug #570359.

We already have a fix for that, but it only works for internal device opens within an LVM command. This fix should appear in the next lvm2 build, which is targeted for RHEL 6.2.

But if you open a device read-write, close it afterwards, and then immediately run lvremove on it, you can get into a race, because the udev watch rule triggers another open from within the udev rules. We have no way to synchronize with such events directly. What you can do to save the situation is to use "udevadm settle" in between such commands (this could have a performance impact, since with udevadm settle you wait for *all* devices being processed in the system to settle down, not just the one you're processing).
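
For illustration, a minimal sketch of where "udevadm settle" could go in the reporter's script (variable names are taken from the commands above; the exact placement is an assumption, not a confirmed fix):

# ... snapshot created and e2fsck run as in the script above ...
udevadm settle               # wait for udev to finish processing outstanding events, including watch-rule opens
lvremove -f $vg/$lv-snap     # remove the snapshot only after udev has settled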

Comment 3 Peter Rajnoha 2011-08-04 13:43:05 UTC
Is this the udisks issue mentioned in bug #570359? You can test it quickly as noted in https://bugzilla.redhat.com/show_bug.cgi?id=570359#c10.

Comment 4 Brian Wheeler 2011-08-05 14:21:28 UTC
I'm testing it now. I commented out the line per bug 570359 comment 10 and issued a "udevadm control --reload-rules". I also added a "udevadm settle" just before the lvremove.
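
For reference, the commands involved amount to roughly the following (the udisks rule file and the exact line to comment out are described in bug 570359 comment 10 and are not reproduced here):

udevadm control --reload-rules   # make udevd pick up the edited rules without a reboot
udevadm settle                   # added to the script immediately before the lvremove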

It seems to have fixed it, so this is probably a dupe.

Comment 5 Peter Rajnoha 2011-08-05 17:21:30 UTC
(In reply to comment #4)
> It seems to have fixed it, so this is probably a dupe.

Thanks for trying it out. So I'll close this one.

*** This bug has been marked as a duplicate of bug 570395 ***

Comment 6 Peter Rajnoha 2011-08-05 17:24:04 UTC

*** This bug has been marked as a duplicate of bug 570359 ***

