Bug 1118710 - The error info is not accurate when doing vol-wipe on a volume based on a gluster pool
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
Depends On:
Reported: 2014-07-11 09:52 UTC by Shanzhi Yu
Modified: 2015-03-05 07:40 UTC

Fixed In Version: libvirt-1.2.7-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2015-03-05 07:40:54 UTC
Target Upstream Version:


Links:
System ID: Red Hat Product Errata RHSA-2015:0323
Status: SHIPPED_LIVE
Summary: Low: libvirt security, bug fix, and enhancement update
Last Updated: 2015-03-05 12:10:54 UTC

Description Shanzhi Yu 2014-07-11 09:52:29 UTC
Description of problem:

The error info is not accurate when do vol-wipe with volume based on gluster pool

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1. Prepare a gluster-type pool

# virsh pool-list
  Name                 State      Autostart
  pool-gluster         active     no

2. Wipe a volume in the gluster pool

# virsh vol-list pool-gluster
  Name                 Path
  test.raw             gluster://

Actual results:

# virsh vol-wipe --pool pool-gluster test.raw
error: Failed to wipe vol test.raw
error: Failed to open storage volume with path 'gluster://gluster server/gluster-vol1/test.raw': No such file or directory

# qemu-img info gluster://gluster server/gluster-vol1/test.raw
image: gluster://gluster server/gluster-vol1/test.raw
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0

Expected results:

Vol-wipe fails with a clear error message stating that volume wiping is not supported for volumes in a gluster (non-local) pool.

Additional info:

Since vol-wipe is not supported on non-local volumes, libvirt should give a clearer error message about that.
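
To illustrate why the original message is misleading, here is a minimal C sketch of the failure mode (hypothetical names, not libvirt's actual code): a wipe routine that assumes a local file open()s the volume's target path, but a gluster volume's path is a gluster:// URI, so open() fails with ENOENT, producing exactly the "No such file or directory" error seen above.

/* Minimal sketch (hypothetical names, not libvirt's actual code) of the
 * failure mode: the generic wipe path open()s the volume's target path,
 * but for a gluster pool that path is a gluster:// URI rather than a
 * local file, so open() sets errno to ENOENT and the misleading
 * "No such file or directory" message above is produced. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int
wipe_vol_local(const char *path)
{
    int fd = open(path, O_RDWR);   /* path is "gluster://.../test.raw" */

    if (fd < 0) {
        /* errno == ENOENT: the URI is not a local filesystem path */
        fprintf(stderr,
                "Failed to open storage volume with path '%s': %s\n",
                path, strerror(errno));
        return -1;
    }

    /* ... overwrite the contents with zeros, fsync(fd) ... */
    close(fd);
    return 0;
}

int
main(void)
{
    return wipe_vol_local("gluster://example-server/gluster-vol1/test.raw") < 0;
}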

Comment 1 Peter Krempa 2014-07-17 08:18:50 UTC
Fixed in v1.2.6-181-g11d2805

commit 11d28050c58bc44cc2bbb736468e553a3a322409
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Mon Jul 7 16:50:11 2014 +0200

    storage: Split out volume wiping as separate backend function
    For non-local storage drivers we can't expect to use the "scrub" tool to
    wipe the volume. Split the code into a separate backend function so that
    we can add protocol specific code later.
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710

commit 4d799b65cd193b2e23701b0cf548b85fdd498bcd
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Mon Jul 7 15:41:33 2014 +0200

    storage: wipe: Move helper code into storage backend
    The next patch will move the storage volume wiping code into the
    individual backends. This patch splits out the common code to wipe a
    local volume into a separate backend helper so that the next patch is
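
In other words, the commits make volume wiping a per-backend callback. A simplified C sketch of that pattern follows (hypothetical names, loosely modeled on the split the commit messages describe, not the actual libvirt code): a backend that cannot wipe, such as gluster at this point, leaves the callback unset, so the driver can report a clear "not supported" error instead of attempting the local scrub/open() path.

/* Simplified sketch (hypothetical names) of the per-backend split:
 * each storage backend may provide its own wipeVol callback; a backend
 * without one gets a clear "not supported" error instead of the local
 * open()/scrub path. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    int (*wipeVol)(const char *path, unsigned int algorithm);
    /* ... other per-backend operations: buildVol, deleteVol, ... */
} StorageBackend;

static int
storage_vol_wipe(const StorageBackend *backend,
                 const char *path, unsigned int algorithm)
{
    if (!backend->wipeVol) {
        /* the clear error message verified in comment 3 below */
        fprintf(stderr,
                "storage pool doesn't support volume wiping\n");
        return -1;
    }
    return backend->wipeVol(path, algorithm);
}

int
main(void)
{
    StorageBackend gluster = { .wipeVol = NULL };  /* no wipe support yet */

    return storage_vol_wipe(&gluster, "gluster-vol1/test.qcow2", 0) < 0;
}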

Comment 3 Xuesong Zhang 2014-12-23 11:08:47 UTC
Verified this bug with the following package version:

1. Prepare a gluster pool that contains some volumes
# virsh pool-list 
 Name                 State      Autostart 
 default              active     yes       
 gluster-pool         active     no        

# virsh vol-list gluster-pool
 Name                 Path                                    
 test.qcow2           gluster://#glusterServerIP#/gluster-vol1/test.qcow2
 test.raw             gluster://#glusterServerIP#/gluster-vol1/test.raw

2. Running vol-wipe on a volume of the gluster pool fails with the expected error message.

# virsh vol-wipe test.qcow2 --pool gluster-pool
error: Failed to wipe vol test.qcow2
error: this function is not supported by the connection driver: storage pool doesn't support volume wiping

Changing the bug status to VERIFIED.

Comment 5 errata-xmlrpc 2015-03-05 07:40:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

