Bug 1085769 - [Storage][vol-clone] Volume was cloned successfully when passing a non-existing pool
Summary: [Storage][vol-clone] Volume was cloned successfully when passing a non-existing pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-09 10:03 UTC by Yang Yang
Modified: 2015-03-05 07:31 UTC
CC List: 7 users

Fixed In Version: libvirt-1.2.7-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:31:56 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHSA-2015:0323 | SHIPPED_LIVE | Low: libvirt security, bug fix, and enhancement update | 2015-03-05 12:10:54 UTC

Description Yang Yang 2014-04-09 10:03:38 UTC
Description:
Tried to clone a volume with the virsh vol-clone command, passing a non-existing pool together with the full volume path. virsh printed an error about the pool, but the volume was cloned successfully anyway.

Product Version:
libvirt-1.1.1-29.el7.x86_64
qemu-kvm-rhev-1.5.3-60.el7ev.x86_64

How reproducible:
Always

Steps:
1. Create a qcow2-v3-format (compat 1.1) volume in the default pool
# cat qcow3-vol2.xml
<volume>
  <name>qcow3-vol2</name>
  <source>
  </source>
  <capacity unit='bytes'>1024000000</capacity>
  <allocation unit='bytes'>204000</allocation>
  <target>
    <format type='qcow2'/>
    <compat>1.1</compat>
    <features>
    <lazy_refcounts/>
    </features>
  </target>
</volume>

# virsh vol-create default qcow3-vol2.xml
Vol qcow3-vol2 created from qcow3-vol2.xml

2. Clone the volume, passing a non-existing pool
# virsh vol-clone --pool net /var/lib/libvirt/images/qcow3-vol2 qcow3-vol2.bak

Actual results:
in step 2:
error: failed to get pool 'net'
Vol qcow3-vol2.bak cloned from qcow3-vol2

Expected results:
in step 2:
the virsh command should exit with an error such as:
error: failed to get pool 'net'
and the volume clone should fail.

Comment 1 Ján Tomko 2014-04-09 11:38:55 UTC
Fixed upstream by:
commit a751e3452b0280df64a9a67a6c96a09e1045026e
Author:     Peter Krempa <pkrempa>
CommitDate: 2014-03-05 09:08:32 +0100

    virsh: volume: Fix lookup of volumes to provide better error messages
    
    If a user specifies the pool explicitly, we should make sure to point
    out that it's inactive instead of falling back to lookup by key/path and
    failing at the end. Also if the pool isn't found there's no use in
    continuing the lookup.
    
    This changes the error in case the user-selected pool is inactive from:
    
     $ virsh vol-upload --pool inactivepool --vol somevolname volcontents
     error: failed to get vol 'somevolname'
     error: Storage volume not found: no storage vol with matching path
     somevolname
    
    To a more descriptive:
    
     $ virsh vol-upload --pool inactivepool --vol somevolname volcontents
     error: pool 'inactivepool' is not active
    
    And in case a user specifies an invalid pool from:
    
     $ virsh vol-upload --pool invalidpool --vol somevolname volcontents
     error: failed to get pool 'invalidpool'
     error: failed to get vol 'somevolname', specifying --pool might help
     error: Storage volume not found: no storage vol with matching path somevolname
    
    To something less confusing:
    
     $ virsh vol-upload --pool invalidpool --vol somevolname volcontents
     error: failed to get pool 'invalidpool'
     error: Storage pool not found: no storage pool with matching name 'invalidpool'

git describe: v1.2.2-38-ga751e34 contains: v1.2.3-rc1~349
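The behavior change in the commit above can be illustrated with a minimal Python sketch (not libvirt's actual C implementation; the dict-based pool model and function names here are hypothetical). The key point is the lookup order: when the user names a pool explicitly, any pool-level failure (pool missing or inactive) aborts the volume lookup immediately, instead of falling through to the lookup-by-key/path path and reporting a misleading "failed to get vol" error.

```python
def lookup_volume(pools, vol, pool_name=None):
    """Sketch of the fixed lookup order.

    pools maps pool name -> {"active": bool, "volumes": set of names}.
    """
    if pool_name is not None:
        pool = pools.get(pool_name)
        if pool is None:
            # Pool does not exist: stop here, do not fall back to other lookups.
            raise LookupError(f"failed to get pool '{pool_name}'")
        if not pool["active"]:
            # Pool exists but is inactive: report that directly.
            raise LookupError(f"pool '{pool_name}' is not active")
        if vol not in pool["volumes"]:
            raise LookupError(f"failed to get vol '{vol}'")
        return (pool_name, vol)
    # No pool given: search active pools by volume name (a stand-in for
    # libvirt's name/key/path fallback chain).
    for name, pool in pools.items():
        if pool["active"] and vol in pool["volumes"]:
            return (name, vol)
    raise LookupError(f"failed to get vol '{vol}', specifying --pool might help")
```

Before the fix, the explicit-pool branch fell through to the fallback search on pool errors, which is why commands like vol-clone succeeded despite the "failed to get pool" message.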

Comment 2 Yang Yang 2014-05-07 03:35:17 UTC
The same issue is hit by the virsh commands vol-resize, vol-wipe, vol-delete, vol-dumpxml, vol-info, vol-key, vol-path, and vol-download.

# virsh vol-resize --pool xx /var/lib/libvirt/images/qcow3-vol2 6G
error: failed to get pool 'xx'
Size of volume 'raw' successfully changed to 6G

# virsh vol-wipe --pool xx /var/lib/libvirt/images/qcow3-vol2
error: failed to get pool 'xx'
Vol /var/lib/libvirt/images/qcow3-vol2 wiped

# virsh vol-delete --pool xx /var/lib/libvirt/images/qcow3-vol2
error: failed to get pool 'xx'
Vol /var/lib/libvirt/images/qcow3-vol2 deleted

Comment 4 Pei Zhang 2014-12-02 07:57:14 UTC
Verified with versions:
libvirt-1.2.8-9.el7.x86_64
qemu-kvm-rhev-2.1.2-13.el7.x86_64
kernel-3.10.0-211.el7.x86_64

steps:
1. Define a pool:
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 dir-pool             inactive   no        

2. Use an absolute path without specifying a pool:
# virsh vol-key /tmp/dir-pool/vol1.xml 
error: failed to get vol '/tmp/dir-pool/vol1.xml', specifying --pool might help
error: Storage volume not found: no storage vol with matching path '/tmp/dir-pool/vol1.xml'

3. Try vol-XXX commands with a specified pool that is inactive:
# virsh vol-upload --pool dir-pool --vol vol1.xml upload 
error: pool 'dir-pool' is not active

# virsh vol-info /tmp/dir-pool/vol1.xml --pool dir-pool
error: pool 'dir-pool' is not active

# virsh vol-key /tmp/dir-pool/vol1.xml --pool dir-pool
error: pool 'dir-pool' is not active

4. Start the pool; the vol-XXX commands now succeed:
# virsh pool-start dir-pool 
Pool dir-pool started
# virsh vol-list dir-pool
 Name                 Path                                    
------------------------------------------------------------------------------
 r7.img               /tmp/dir-pool/r7.img                            
 vol1.xml             /tmp/dir-pool/vol1.xml             

Upload a local file named "upload" to a volume in the pool; the upload succeeds:

# virsh vol-upload --pool dir-pool --vol vol1.xml upload 

Get the vol-key with a specified pool; it succeeds:
# virsh vol-key /tmp/dir-pool/vol1.xml --pool dir-pool
/tmp/dir-pool/vol1.xml

5. Use an invalid pool; the vol-XXX commands fail:

# virsh vol-resize --pool invalid-pool /tmp/dir-pool/vol1.xml 1G
error: failed to get pool 'invalid-pool'
error: Storage pool not found: no storage pool with matching name 'invalid-pool'

# virsh vol-download --pool invalid-pool --vol vol1.xml download
error: failed to get pool 'invalid-pool'
error: Storage pool not found: no storage pool with matching name 'invalid-pool'

# virsh vol-delete --pool XX /tmp/dir-pool/vol1.xml 
error: failed to get pool 'XX'
error: Storage pool not found: no storage pool with matching name 'XX'

The vol-XXX commands fail if the specified pool is inactive or invalid.
Moving to VERIFIED.

Comment 6 errata-xmlrpc 2015-03-05 07:31:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

