Description of problem:
The error message is inaccurate when running vol-wipe on a volume in a gluster-based pool.
Version-Release number of selected component (if applicable):
libvirt-1.1.1-29.el7.x86_64
How reproducible:
100%
Steps to Reproduce:
1. Prepare a gluster-type pool
# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 pool-gluster         active     no
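For reference, the same pool can be set up through the libvirt C API; a minimal sketch, assuming the host IP and gluster volume name from this report (compile with -lvirt):

/* Sketch: define and start the gluster pool via the libvirt C API.
 * Host IP and gluster volume name are the values used in this report. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *xml =
        "<pool type='gluster'>"
        "  <name>pool-gluster</name>"
        "  <source>"
        "    <host name='10.66.4.135'/>"
        "    <name>gluster-vol1</name>"
        "    <dir path='/'/>"
        "  </source>"
        "</pool>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolDefineXML(conn, xml, 0);
    if (pool && virStoragePoolCreate(pool, 0) == 0)
        printf("pool-gluster defined and started\n");

    if (pool)
        virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}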
2. Wipe a volume in the gluster pool:
# virsh vol-list pool-gluster
 Name                 Path
------------------------------------------------------------------------------
 test.raw             gluster://10.66.4.135/gluster-vol1/test.raw
# virsh vol-wipe --pool pool-gluster test.raw
error: Failed to wipe vol test.raw
error: Failed to open storage volume with path 'gluster://gluster server/gluster-vol1/test.raw': No such file or directory
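The same failure can be reproduced through the libvirt C API; a minimal sketch, assuming the pool and volume names from step 1 (compile with -lvirt):

/* Sketch: calling virStorageVolWipe() on the gluster-backed volume; on
 * the affected build this surfaces the misleading ENOENT error above. */
#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "pool-gluster");
    virStorageVolPtr vol = pool ? virStorageVolLookupByName(pool, "test.raw") : NULL;

    if (vol && virStorageVolWipe(vol, 0) < 0) {
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "wipe failed: %s\n", err ? err->message : "unknown");
    }

    if (vol) virStorageVolFree(vol);
    if (pool) virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}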
3. Confirm the volume exists and is readable with qemu-img:
# qemu-img info gluster://gluster server/gluster-vol1/test.raw
image: gluster://gluster server/gluster-vol1/test.raw
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0
Actual results:
vol-wipe fails with a misleading 'No such file or directory' error, even though the volume exists and qemu-img can read it.
Expected results:
A clear error message stating that volume wiping is not supported on non-local (gluster) volumes.
Additional info:
Since vol-wipe is not supported on non-local volumes, libvirt should give a clearer error message about that.
Fixed in v1.2.6-181-g11d2805
commit 11d28050c58bc44cc2bbb736468e553a3a322409
Author: Peter Krempa <pkrempa>
Date: Mon Jul 7 16:50:11 2014 +0200
storage: Split out volume wiping as separate backend function
For non-local storage drivers we can't expect to use the "scrub" tool to
wipe the volume. Split the code into a separate backend function so that
we can add protocol specific code later.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710
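As a rough illustration of the pattern this commit introduces, here is a self-contained model (not libvirt's actual code; the struct and function names are hypothetical): each backend exposes an optional wipe callback, and the generic driver refuses cleanly when a backend has none.

#include <stdio.h>

typedef struct {
    const char *type;
    int (*wipe_vol)(const char *path);   /* NULL => wiping unsupported */
} storage_backend;

/* local backends can overwrite the file directly */
static int wipe_vol_local(const char *path)
{
    printf("wiping local volume %s\n", path);
    return 0;
}

/* generic driver dispatch: report a clear error instead of trying
 * (and failing) to open a non-local path */
static int vol_wipe(const storage_backend *be, const char *path)
{
    if (!be->wipe_vol) {
        fprintf(stderr, "storage pool doesn't support volume wiping\n");
        return -1;
    }
    return be->wipe_vol(path);
}

int main(void)
{
    storage_backend fs      = { "fs", wipe_vol_local };
    storage_backend gluster = { "gluster", NULL };  /* no wipe callback yet */

    vol_wipe(&fs, "/var/lib/libvirt/images/test.raw");
    vol_wipe(&gluster, "gluster://host/gluster-vol1/test.raw");
    return 0;
}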
commit 4d799b65cd193b2e23701b0cf548b85fdd498bcd
Author: Peter Krempa <pkrempa>
Date: Mon Jul 7 15:41:33 2014 +0200
storage: wipe: Move helper code into storage backend
The next patch will move the storage volume wiping code into the
individual backends. This patch splits out the common code to wipe a
local volume into a separate backend helper so that the next patch is
simpler.
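A self-contained sketch of what the factored-out local-wipe helper does for the default zero-fill algorithm (the function name and path are hypothetical; per the commit above, libvirt invokes the external "scrub" tool for the other wiping algorithms):

#include <stdio.h>
#include <string.h>

/* overwrite the first `capacity` bytes of a local file with zeros */
static int wipe_local_zero(const char *path, long long capacity)
{
    char block[4096];
    long long written = 0;
    FILE *f = fopen(path, "r+b");

    if (!f) {
        perror(path);   /* a gluster:// URL would fail exactly here */
        return -1;
    }
    memset(block, 0, sizeof(block));
    while (written < capacity) {
        size_t chunk = sizeof(block);
        if (capacity - written < (long long)chunk)
            chunk = (size_t)(capacity - written);
        if (fwrite(block, 1, chunk, f) != chunk) {
            fclose(f);
            return -1;
        }
        written += chunk;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    return wipe_local_zero("/tmp/test.raw", 1024 * 1024);
}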
Verified this bug with the following package versions:
libvirt-1.2.8-11.el7.x86_64
qemu-img-rhev-2.1.2-17.el7.x86_64
kernel-3.10.0-219.el7.x86_64
Steps:
1. Prepare a gluster pool containing some volumes:
# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 gluster-pool         active     no
# virsh vol-list gluster-pool
 Name                 Path
------------------------------------------------------------------------------
 test.qcow2           gluster://#glusterServerIP#/gluster-vol1/test.qcow2
 test.raw             gluster://#glusterServerIP#/gluster-vol1/test.raw
2. Running vol-wipe on a volume in the gluster pool fails with the expected error message:
# virsh vol-wipe test.qcow2 --pool gluster-pool
error: Failed to wipe vol test.qcow2
error: this function is not supported by the connection driver: storage pool doesn't support volume wiping
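For API consumers, the new failure surfaces as a regular libvirt error code; a small sketch, assuming the pool and volume names from this test (VIR_ERR_NO_SUPPORT corresponds to the "not supported by the connection driver" message above):

/* Sketch: detect the unsupported-wipe case by error code rather than
 * by parsing the message string. */
#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virStoragePoolPtr pool = conn ? virStoragePoolLookupByName(conn, "gluster-pool") : NULL;
    virStorageVolPtr vol = pool ? virStorageVolLookupByName(pool, "test.qcow2") : NULL;

    if (vol && virStorageVolWipe(vol, 0) < 0) {
        virErrorPtr err = virGetLastError();
        if (err && err->code == VIR_ERR_NO_SUPPORT)
            printf("pool backend does not support volume wiping\n");
    }

    if (vol) virStorageVolFree(vol);
    if (pool) virStoragePoolFree(pool);
    if (conn) virConnectClose(conn);
    return 0;
}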
Changing the bug status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHSA-2015-0323.html