Bug 1242801 - Libvirtd restart does not remove transient pool when pool source is unavailable
Summary: Libvirtd restart does not remove transient pool when pool source is unavailable
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-14 08:25 UTC by Yang Yang
Modified: 2018-04-10 10:35 UTC
CC List: 8 users

Fixed In Version: libvirt-3.7.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:33:22 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHEA-2018:0704 (last updated 2018-04-10 10:35:51 UTC)

Description Yang Yang 2015-07-14 08:25:31 UTC
Description of problem:
Libvirtd restart does not remove transient pool when pool source is unavailable.
As a result, the pool cannot be removed any more unless it is defined again (pool-define) with the source restored.

Version-Release number of selected component (if applicable):
libvirt-1.2.17-2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. prepare a disk
# iscsiadm --mode node --login --targetname iqn.2015-07.com.virttest:disk-pool.target --portal 127.0.0.1
Logging in to [iface: default, target: iqn.2015-07.com.virttest:disk-pool.target, portal: 127.0.0.1,3260] (multiple)
Login to [iface: default, target: iqn.2015-07.com.virttest:disk-pool.target, portal: 127.0.0.1,3260] successful.

# iscsiadm -m session -P 3
iscsi device: /dev/sdb
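
(Side note, not part of the original report: step 1 assumes the iSCSI target iqn.2015-07.com.virttest:disk-pool.target already exists. A rough, hypothetical targetcli sketch for a throwaway test host, with a made-up file-backed LUN name and size and demo-mode access, might look like the following; a discovery step is also needed before the node login above succeeds:)

# targetcli /backstores/fileio create disk_pool_lun /var/tmp/disk_pool_lun.img 1G
# targetcli /iscsi create iqn.2015-07.com.virttest:disk-pool.target
# targetcli /iscsi/iqn.2015-07.com.virttest:disk-pool.target/tpg1/luns create /backstores/fileio/disk_pool_lun
# targetcli /iscsi/iqn.2015-07.com.virttest:disk-pool.target/tpg1 set attribute generate_node_acls=1 demo_mode_write_protect=0
# iscsiadm -m discovery -t sendtargets -p 127.0.0.1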

2. create pool

# cat disk.xml
<pool type='disk'>
  <name>snap_disk_pool</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sdb'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>

# virsh pool-create disk.xml
Pool snap_disk_pool created from disk.xml

# virsh pool-list --all --transient
 Name                 State      Autostart
-------------------------------------------
 snap_disk_pool       active     no        


3. remove the disk
# iscsiadm -m node -U all
Logging out of session [sid: 291, target: iqn.2015-07.com.virttest:disk-pool.target, portal: 127.0.0.1,3260]
Logout of [sid: 291, target: iqn.2015-07.com.virttest:disk-pool.target, portal: 127.0.0.1,3260] successful.

# lsblk | grep sdb

4. restart libvirtd

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh pool-list --all --transient
 Name                 State      Autostart
-------------------------------------------
 snap_disk_pool       inactive   no        

5. undefine pool
# virsh pool-undefine snap_disk_pool
error: Failed to undefine pool snap_disk_pool
error: internal error: no config file for snap_disk_pool

Actual results:


Expected results:
The transient pool should be removed when its source is unavailable at libvirtd restart.

Additional info:
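A possible way to get rid of the stuck pool (an untested sketch, following the note above that a pool-define with the source restored makes removal work again):

log the target back in so /dev/sdb reappears, then give the pool a persistent config and remove it
# iscsiadm --mode node --login --targetname iqn.2015-07.com.virttest:disk-pool.target --portal 127.0.0.1
# virsh pool-define disk.xml
# virsh pool-undefine snap_disk_pool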

Comment 3 lijuan men 2017-03-22 05:27:13 UTC
there is another scenario:

I created a transient gluster pool. When I restart libvirtd, the pool becomes inactive, but I can **restart** the pool. Is this the same issue as this bug?

version:
libvirt-3.1.0-2.el7
qemu-kvm-rhev-2.8.0-6.el7.x86_64

steps:
1. create a transient gluster pool with the following XML:
<pool type='gluster'>
  <name>gluster</name>
  <source>
    <host name='10.66.70.107'/>
    <name>test</name>
    <dir path='/'/>
  </source>
</pool>

# virsh pool-create gluster-pool.xml

# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 Downloads            active     yes       
 gluster              active     no    

2. restart libvirtd
# systemctl restart libvirtd

3. check the status of the gluster pool
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 Downloads            active     yes       
 gluster              inactive   no        

****I can't undefine it****
# virsh pool-undefine gluster
error: Failed to undefine pool gluster
error: internal error: no config file for gluster

****But I can restart it****
# virsh pool-start gluster
Pool gluster started

# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 Downloads            active     yes       
 gluster              active     no        

NOTE:
If I define and start a ***persistent*** gluster pool and restart libvirtd, the gluster pool will also be in inactive status.

Is this the same issue as this bug?

Comment 4 Erik Skultety 2017-03-22 10:16:26 UTC
Yes, it's the same issue. A transient object should only exist in an active/running state, so the fact that it appears in an inactive state is wrong and contradicts the definition of a transient object. Additionally, the fact that you're able to re-start it only supports that statement, since starting an inactive pool means there is a persistent config backing the pool.
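
For reference, the distinction can be checked from virsh directly (a quick sketch; pool.xml and the pool name are placeholders, not from this bug):

a pool created this way is transient: no config file on disk, it should exist only while active
# virsh pool-create pool.xml
# virsh pool-list --all --transient

a pool defined and then started is persistent: it keeps a config file, may sit inactive, and can be started again later
# virsh pool-define pool.xml
# virsh pool-start <pool-name>
# virsh pool-list --all --persistent

either way, "virsh pool-info <pool-name>" prints a "Persistent:" field showing which case applies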

Comment 5 Erik Skultety 2017-03-22 10:19:04 UTC
> NOTE:
> If I define and start a ***persistent**** gluster ,restart the libvirtd,the
> gluster pool will also be inactive status.

Not sure what you mean; to be clear, did you define the pool as persistent, then remove the source and then restart the daemon, or am I missing something?

Comment 6 lijuan men 2017-03-23 02:31:00 UTC
(In reply to Erik Skultety from comment #5)
> > NOTE:
> > If I define and start a ***persistent**** gluster ,restart the libvirtd,the
> > gluster pool will also be inactive status.
> 
> Not sure what you mean, let's get clear, did you define the pool as
> persistent, then removed the source and then restarted the daemon or am I
> missing something?

I am sorry I didn't say it clearly.

the detailed scenario is:
1. I define a persistent gluster pool, start the pool and ensure the pool is active
2. restart libvirtd (**not** removing the source, only restarting libvirtd)
# systemctl restart libvirtd

3. the pool becomes inactive

Comment 7 Erik Skultety 2017-03-23 07:33:03 UTC
> the detailed scenario is:
> 1.I define a persistent gluster pool,start the pool and ensure the pool is
> active
> 2.restart the libvirtd(**not** remove the source,only restart the libvirtd)
> # systemctl restart libvirtd
> 
> 3.the pool will be inactive

In that case, this scenario is clearly not related to this BZ; it is rather a gluster issue. I'd suggest creating a separate BZ for it against libvirt and we'll investigate whether it is indeed a libvirt issue.

Comment 8 lijuan men 2017-03-27 05:20:57 UTC
(In reply to Erik Skultety from comment #7)
> > the detailed scenario is:
> > 1.I define a persistent gluster pool,start the pool and ensure the pool is
> > active
> > 2.restart the libvirtd(**not** remove the source,only restart the libvirtd)
> > # systemctl restart libvirtd
> > 
> > 3.the pool will be inactive
> 
> In that case, ^^this scenario is clearly not related to this BZ, rather it's
> a gluster issue, so I'd suggest creating a separate BZ for this in libvirt
> and we'll investigate if it indeed is a libvirt issue.

OK, I filed another bug for it:

https://bugzilla.redhat.com/show_bug.cgi?id=1436065

thanks~

Comment 9 Peter Krempa 2017-03-30 12:03:47 UTC
I've noticed this issue while fixing the bug mentioned in comment 8. I've posted patches fixing it:

https://www.redhat.com/archives/libvir-list/2017-March/msg01572.html

Comment 10 Peter Krempa 2017-04-03 06:45:41 UTC
commit f3a8e80c130513c2b488df5a561c788133148685
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 30 13:47:45 2017 +0200

    storage: driver: Remove unavailable transient pools after restart
    
    If a transient storage pool is deemed inactive after libvirtd restart it
    would not be deleted from the list. Reuse virStoragePoolUpdateInactive
    along with a refactor necessary to properly update the state.

Comment 12 yisun 2017-11-24 12:53:19 UTC
Tested with libvirt-3.9.0-2.el7.x86_64 and PASSED

## iscsiadm --mode node --targetname iqn.2016-03.com.virttest:logical-pool.target --portal 10.66.5.64:3260 --login
Logging in to [iface: default, target: iqn.2016-03.com.virttest:logical-pool.target, portal: 10.66.5.64,3260] (multiple)
Login to [iface: default, target: iqn.2016-03.com.virttest:logical-pool.target, portal: 10.66.5.64,3260] successful.

## lsscsi
...
[8:0:0:0]    disk    LIO-ORG  device.logical-  4.0   /dev/sdc 

## cat pool.xml
<pool type='disk'>
  <name>test_pool</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/sdc'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>

## virsh pool-create pool.xml
Pool test_pool created from pool.xml


## virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
...     
 test_pool            active     no        



## iscsiadm --mode node --targetname iqn.2016-03.com.virttest:logical-pool.target --portal 10.66.5.64:3260 --logout
Logging out of session [sid: 2, target: iqn.2016-03.com.virttest:logical-pool.target, portal: 10.66.5.64,3260]
Logout of [sid: 2, target: iqn.2016-03.com.virttest:logical-pool.target, portal: 10.66.5.64,3260] successful.

## lsscsi | grep sdc; echo $?
1

## service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

## virsh pool-list --all | grep test_pool; echo $?
1

Comment 16 errata-xmlrpc 2018-04-10 10:33:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

