Bug 1072141
| Summary: | "pool-list --type gluster" lists pools of other types | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | chhu |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 7.0 | CC: | ajia, dyuan, mzhan, pkrempa, pzhang, rbalakri, shyu, xuzhang |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | libvirt-1.2.7-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2015-03-05 07:30:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Move to rhel7.1.0 since it's not a blocker. This is already fixed upstream:
commit f336b1cccb2bd92ae038b48da5fc735ec0a68f40
Author: Christophe Fergeau <cfergeau>
Date: Thu Feb 6 16:12:14 2014 +0100

    Add glusterfs to VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_POOL_TYPE

    If it's not present in this list, we won't be able to get only
    glusterfs pools when using virConnectListAllStoragePools.
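The faulty logic can be illustrated with a minimal Python sketch (the flag names and values below are illustrative, not libvirt's actual constants): when a pool type's flag is missing from the FILTERS_POOL_TYPE mask, passing only that flag makes the filter code conclude that no type filter was requested, so pools of every type are returned.

```python
# Illustrative sketch of the pool-type filtering bug; the flag values are
# made up here, only the masking logic mirrors the defect.
FLAG_DIR     = 1 << 0
FLAG_DISK    = 1 << 1
FLAG_LOGICAL = 1 << 2
FLAG_GLUSTER = 1 << 3

TYPE_FLAG = {"dir": FLAG_DIR, "disk": FLAG_DISK,
             "logical": FLAG_LOGICAL, "gluster": FLAG_GLUSTER}

# Before the fix, the combined mask omitted the glusterfs flag,
# as the commit message notes.
FILTERS_POOL_TYPE_BUGGY = FLAG_DIR | FLAG_DISK | FLAG_LOGICAL
FILTERS_POOL_TYPE_FIXED = FILTERS_POOL_TYPE_BUGGY | FLAG_GLUSTER

def list_pools(pools, flags, type_mask):
    # If no recognized type flag is set, type filtering is skipped entirely.
    if not flags & type_mask:
        return [name for name, _ in pools]
    return [name for name, ptype in pools if flags & TYPE_FLAG[ptype]]

pools = [("default", "dir"), ("gluster-vol1", "gluster")]
print(list_pools(pools, FLAG_GLUSTER, FILTERS_POOL_TYPE_BUGGY))  # both pools leak through
print(list_pools(pools, FLAG_GLUSTER, FILTERS_POOL_TYPE_FIXED))  # only the gluster pool
```

With the buggy mask, `FLAG_GLUSTER & FILTERS_POOL_TYPE_BUGGY` is zero, so the type check never runs, which reproduces the behavior in the original report.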
version:
kernel-3.10.0-203.el7.x86_64
qemu-kvm-rhev-2.1.2-8.el7.x86_64
libvirt-1.2.8-6.el7.x86_64
steps:
1. define and start a gluster pool
# virsh pool-dumpxml gluster-pool
<pool type='gluster'>
<name>gluster-pool</name>
<uuid>35a081e7-deb8-4255-99fe-3ac09876b7da</uuid>
<capacity unit='bytes'>340698349568</capacity>
<allocation unit='bytes'>83485872128</allocation>
<available unit='bytes'>257212477440</available>
<source>
<host name='10.66.5.165'/>
<dir path='/'/>
<name>gluster-vol1</name>
</source>
</pool>
# virsh pool-list --all
Name State Autostart
-------------------------------------------
default active yes
dir-pool active no
disk-pool active no
gluster-pool active no
logical-pool active no
mylgpool active no
2. list the inactive gluster pool
pool is inactive:
# virsh pool-list --inactive gluster
Name State Autostart
-------------------------------------------
gluster-pool inactive no
list pool details:
# virsh pool-list gluster --details --all
Name State Autostart Persistent Capacity Allocation Available
--------------------------------------------------------------------------------
gluster-pool inactive no yes - - -
pool is persistent:
# virsh pool-list --inactive --transient gluster
Name State Autostart
-------------------------------------------
# virsh pool-list --inactive --persistent gluster
Name State Autostart
-------------------------------------------
gluster-pool inactive no
pool is not autostarted:
# virsh pool-list --inactive --autostart gluster
Name State Autostart
-------------------------------------------
# virsh pool-list --inactive --no-autostart gluster
Name State Autostart
-------------------------------------------
gluster-pool inactive no
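The state, persistence, and autostart filters exercised above combine as independent flag groups; the matching rule can be sketched in Python (flag values are illustrative, not libvirt's actual constants): within each group, a pool passes when no flag of that group is requested, or when the flag matching its state is set.

```python
# Illustrative flag-group matching, loosely modelled on the virsh
# pool-list filter semantics; the values are made up for this sketch.
ACTIVE, INACTIVE        = 1 << 0, 1 << 1
PERSISTENT, TRANSIENT   = 1 << 2, 1 << 3
AUTOSTART, NO_AUTOSTART = 1 << 4, 1 << 5

def match_group(flags, yes_flag, no_flag, value):
    if not flags & (yes_flag | no_flag):
        return True  # neither flag requested: this group does not filter
    return bool(flags & (yes_flag if value else no_flag))

def matches(flags, active, persistent, autostart):
    return (match_group(flags, ACTIVE, INACTIVE, active)
            and match_group(flags, PERSISTENT, TRANSIENT, persistent)
            and match_group(flags, AUTOSTART, NO_AUTOSTART, autostart))

# gluster-pool above is inactive, persistent, and not autostarted:
print(matches(INACTIVE | PERSISTENT, active=False, persistent=True, autostart=False))    # listed
print(matches(INACTIVE | TRANSIENT, active=False, persistent=True, autostart=False))     # filtered out
print(matches(INACTIVE | NO_AUTOSTART, active=False, persistent=True, autostart=False))  # listed
```

This matches the transcripts: `--inactive --persistent` and `--inactive --no-autostart` list gluster-pool, while `--inactive --transient` and `--inactive --autostart` return nothing.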
3. gluster pool is active
# virsh pool-list gluster
Name State Autostart
-------------------------------------------
gluster-pool active yes
# virsh pool-list gluster --inactive
Name State Autostart
-------------------------------------------
# virsh pool-list gluster --details
Name State Autostart Persistent Capacity Allocation Available
----------------------------------------------------------------------------------
gluster-pool running yes yes 317.30 GiB 77.75 GiB 239.55 GiB
pool is autostarted:
# virsh pool-list gluster --autostart
Name State Autostart
-------------------------------------------
gluster-pool active yes
# virsh pool-list gluster --no-autostart
Name State Autostart
-------------------------------------------
pool is persistent:
# virsh pool-list gluster --persistent
Name State Autostart
-------------------------------------------
gluster-pool active yes
# virsh pool-list gluster --transient
Name State Autostart
-------------------------------------------
4. list gluster pools together with other pool types
list gluster pool with existing pool types:
# virsh pool-list gluster,disk,logical
Name State Autostart
-------------------------------------------
disk-pool active no
gluster-pool active no
logical-pool active no
mylgpool active no
# virsh pool-list gluster,disk --details
Name State Autostart Persistent Capacity Allocation Available
----------------------------------------------------------------------------------
disk-pool running no yes 14.91 GiB 14.00 GiB 928.86 MiB
gluster-pool running no yes 317.30 GiB 77.75 GiB 239.55 GiB
list gluster pool with pool types that do not exist on the host:
# virsh pool-list gluster,iscsi
Name State Autostart
-------------------------------------------
gluster-pool active no
# virsh pool-list gluster,zfs,sheepdog --details
Name State Autostart Persistent Capacity Allocation Available
----------------------------------------------------------------------------------
gluster-pool running no yes 317.30 GiB 77.75 GiB 239.55 GiB
5. create a transient gluster pool
# virsh pool-create gluster-pool.xml
Pool gluster-pool created from gluster-pool.xml
# virsh pool-list gluster --all
Name State Autostart
-------------------------------------------
gluster-pool active no
# virsh pool-list gluster --transient --details
Name State Autostart Persistent Capacity Allocation Available
----------------------------------------------------------------------------------
gluster-pool running no no 317.30 GiB 77.75 GiB 239.55 GiB
6. list gluster pool after changing its state
destroy the gluster pool, then list:
# virsh pool-destroy gluster-pool
Pool gluster-pool destroyed
# virsh pool-list gluster --all
Name State Autostart
-------------------------------------------
gluster-pool inactive yes
start the gluster pool, disable autostart, then list:
# virsh pool-start gluster-pool
Pool gluster-pool started
# virsh pool-autostart --disable gluster-pool
Pool gluster-pool unmarked as autostarted
# virsh pool-list gluster --details
Name State Autostart Persistent Capacity Allocation Available
----------------------------------------------------------------------------------
gluster-pool running no yes 317.30 GiB 77.75 GiB 239.55 GiB
pool-list --type gluster now lists only gluster pools.
Move to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0323.html
Description of problem:
"pool-list --type gluster" lists pools of other types

Version-Release number of selected component (if applicable):
libvirt-1.1.1-25.el7.x86_64
qemu-kvm-rhev-1.5.3-50.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. define and start a gluster pool
# more pool-gluster.xml
<pool type='gluster'>
  <source>
    <host name='10.66.84.12'/>
    <name>gluster-vol1</name>
    <dir path='/'/>
  </source>
  <name>gluster-vol1</name>
</pool>
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
gluster-vol1 active no
2. run pool-list --type gluster; pools of other types are listed
# virsh pool-list --type dir
Name State Autostart
-----------------------------------------
default active yes
# virsh pool-list --type gluster
Name State Autostart
-----------------------------------------
default active yes
gluster-vol1 active no

Expected results:
In step 2, "pool-list --type gluster" lists only the gluster pool.

Actual results:
In step 2, "pool-list --type gluster" also lists the dir pool.