Bug 1561758

Summary: Failed to delete RBD image with error - "cannot obtain exclusive lock - not removing" and no watchers in rbd status
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RBD-Mirror
Assignee: Jason Dillaman <jdillama>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: low
Version: 3.0
CC: anharris, ceph-eng-bugs, ceph-qe-bugs, edonnell, jdillama, kdreyer, rperiyas, vashastr
Target Milestone: z1   
Target Release: 3.1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.5-47.el7cp Ubuntu: ceph_12.2.5-32redhat1
Doc Type: Bug Fix
Doc Text:
.RBD images can now be removed even if the optional journal is missing or corrupt

If the RBD journaling feature is enabled, a missing journal prevents the image from being opened, to avoid possible data corruption. This safety feature also prevented an image from being removed when its journal was unavailable. Previously, if this situation occurred, the journaling feature had to be disabled before the image could be removed. With this update, RBD image removal no longer attempts to open the journal, because journal integrity is irrelevant when the image is being deleted. As a result, RBD images can now be removed even if the optional journal is missing or corrupt.
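As a sketch of the behavior described above (the pool and image names are illustrative, not from this bug):

```shell
# Before the fix: removing an image whose journal was missing or corrupt
# required disabling the journaling feature first.
rbd feature disable rbd/myimage journaling
rbd rm rbd/myimage

# With the fix (ceph-12.2.5-47.el7cp / ceph_12.2.5-32redhat1 and later),
# removal skips opening the journal, so the image can be deleted directly:
rbd rm rbd/myimage
```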
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-09 00:59:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1584264    

Description Vikhyat Umrao 2018-03-28 20:14:59 UTC
Description of problem:
Deleting an RBD image fails with the error "cannot obtain exclusive lock - not removing Removing image: 0% complete...failed. rbd: error: image still has watchers", even though the rbd status command reports no watchers on the image.

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3


How reproducible:
Always at the customer site.
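For reference, the symptom can be checked with standard rbd commands (pool and image names are illustrative; the error text is taken from the description above):

```shell
# Removal fails, claiming the image still has watchers:
rbd rm rbd/myimage
#   cannot obtain exclusive lock - not removing
#   Removing image: 0% complete...failed.
#   rbd: error: image still has watchers

# Yet the status command lists no watchers for the image:
rbd status rbd/myimage
```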

Comment 19 errata-xmlrpc 2018-11-09 00:59:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3530