Bug 1663793 - [machines] The state of storage pool can't be updated immediately when the pool is destroyed or undefined
Summary: [machines] The state of storage pool can't be updated immediately when the pool is destroyed or undefined
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: cockpit-appstream
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 8.1
Assignee: Katerina Koukiou
QA Contact: YunmingYang
URL:
Whiteboard:
Depends On: 1678935
Blocks:
 
Reported: 2019-01-07 04:51 UTC by Qin Yuan
Modified: 2020-11-14 09:09 UTC
CC List: 8 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 20:41:35 UTC
Type: Bug
Target Upstream Version:
Embargoed:
Flags: pm-rhel: mirror+


Attachments: (none)


Links:
Red Hat Product Errata RHBA-2019:3325 (last updated 2019-11-05 20:41:46 UTC)

Description Qin Yuan 2019-01-07 04:51:44 UTC
Description of problem:
For a system storage pool:
1) Deleting it on the host via `# virsh pool-undefine $poolname` does not remove the pool from the web UI immediately; refreshing the web UI makes it disappear.

For a session storage pool:
1) Stopping it on the host via `$ virsh pool-destroy $poolname` leaves the pool showing as active in the web UI; neither refreshing nor logging in to Cockpit again changes the state to inactive.
2) Deleting it on the host via `$ virsh pool-undefine $poolname` does not remove the pool from the web UI immediately; refreshing the web UI does not help, but logging in to Cockpit again does (see the command sketch below).
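
A minimal way to cross-check the session case from a terminal (a sketch; `userpool` stands in for any existing session pool):

    $ virsh -c qemu:///session pool-destroy userpool     # libvirt now reports the pool inactive
    $ virsh -c qemu:///session pool-list --all           # userpool listed as inactive
    $ virsh -c qemu:///session pool-undefine userpool    # libvirt removes the pool
    $ virsh -c qemu:///session pool-list --all           # userpool no longer listed

The Storage Pools page should track this output, but per this report it keeps showing the pool as active/present.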


Version-Release number of selected component (if applicable):
cockpit-machines-184-1.el8.noarch
libvirt-dbus-1.2.0-1.module+el8+2529+a9686a4d.x86_64


How reproducible:
100%


Steps to Reproduce:
1. Log in to Cockpit with a user in the libvirt group, e.g. mycount.
2. Open the Storage Pools subpage under Virtual Machines, then:
    1) create a system storage pool
        name: syspool
        type: Filesystem Directory
        target: /var/lib/libvirt/syspool
    2) create a session storage pool:
        name: userpool
        type: Filesystem Directory
        target: /home/mycount/userpool
3. Stop the two storage pools by running `virsh -c qemu:///system pool-destroy syspool` and `virsh pool-destroy userpool` on the host as the user from step 1, then observe the pool status in the web UI.
4. Delete the two storage pools by running `virsh -c qemu:///system pool-undefine syspool` and `virsh pool-undefine userpool` on the host as the user from step 1, then observe whether the pools disappear from the web UI. (A CLI sketch for creating the pools in step 2 follows these steps.)
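
For reference, the pools in step 2 can also be created from the CLI; a sketch using the same names and targets, assuming `dir` is the pool type behind "Filesystem Directory" (`pool-build` creates the target directory if it does not exist):

    $ virsh -c qemu:///system pool-define-as syspool dir --target /var/lib/libvirt/syspool
    $ virsh -c qemu:///system pool-build syspool
    $ virsh -c qemu:///system pool-start syspool
    $ virsh -c qemu:///session pool-define-as userpool dir --target /home/mycount/userpool
    $ virsh -c qemu:///session pool-build userpool
    $ virsh -c qemu:///session pool-start userpool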


Actual results:
1. After the stop commands in step 3, the system pool shows as inactive in the UI, while the session pool still shows as active.
2. After the delete commands in step 4, both the system pool and the session pool are still present in the UI.


Expected results:
1. After step 3, the session pool status in the UI should change to inactive immediately.
2. After step 4, both the system pool and the session pool should disappear from the UI immediately.


Additional info:

Comment 1 Katerina Koukiou 2019-02-08 10:48:28 UTC
Some storage pool event handling was implemented here:

commit 12568f7c882bd3da1640d459d296934c80880779
Author: Katerina Koukiou <kkoukiou>

    machines: Add event handler for storage pools events

And specifically, the event handler for undefining pools was implemented here:

commit edcfa9933c82756546c78da9d1d6489fb8f1f5d6
Author: Katerina Koukiou <kkoukiou>

    machines: implement delete operation for storage pools

Both are included in release 186.
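
These handlers react to storage pool lifecycle signals emitted by libvirt-dbus. As a rough way to observe such signals while Cockpit is running, one can monitor the org.libvirt bus name; a sketch assuming libvirt-dbus is installed and a system pool named syspool exists:

    $ gdbus monitor --system --dest org.libvirt &
    $ virsh -c qemu:///system pool-destroy syspool

The monitor should print a storage pool lifecycle signal (the exact signal name on the org.libvirt.Connect interface is an assumption here); cockpit-machines release 186 subscribes to these events and refreshes the pool list without a manual reload.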

Comment 3 YunmingYang 2019-05-29 10:44:45 UTC
Test Versions:
libvirt-dbus-1.2.0-2.module+el8.1.0+2983+b2ae9c0a.x86_64
cockpit-machines-193-1.el8.noarch

Test Steps:
1. Log in to Cockpit as root and stop the default pool with `virsh pool-destroy default`.
2. Undefine the default pool with `virsh pool-undefine default`.


Test Results:
1. The state of the storage pool on the page changed to 'inactive' after step 1.
2. The storage pool disappeared from the page after step 2.


According to these results, the state of a storage pool is updated immediately when the pool is destroyed or undefined, so moving the status to VERIFIED.
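
The verification above covers the system connection as root only; the session case from the original description can be spot-checked the same way (a sketch, assuming a session pool named userpool exists):

    $ virsh -c qemu:///session pool-destroy userpool
    $ virsh -c qemu:///session pool-undefine userpool

With the fix in cockpit-machines 186 and later, the pool should turn inactive and then disappear from the Storage Pools page without a refresh or re-login.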

Comment 5 errata-xmlrpc 2019-11-05 20:41:35 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3325

