Bug 589942 - Virt-manager: The storage autostart is unchecked, but the status is still "active"
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virt-manager
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Cole Robinson
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-07 11:03 UTC by Jianjiao Sun
Modified: 2010-06-02 13:45 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-06-02 13:45:29 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Jianjiao Sun 2010-05-07 11:03:40 UTC
Description of problem:
I unchecked the autostart of a storage pool and rebooted my guest. I did not get the status "inactive".


Version-Release number of selected component (if applicable):
[unanao@dhcp-65-37 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.0 Beta (Santiago) 

[unanao@dhcp-65-37 ~]$ rpm -qa |grep libvirt
libvirt-python-0.8.0-4.el6.x86_64
libvirt-client-0.8.0-4.el6.x86_64
libvirt-0.8.0-4.el6.x86_64

[unanao@dhcp-65-37 ~]$ rpm -q virt-manager
virt-manager-0.8.4-1.el6.noarch



How reproducible:
always

Steps to Reproduce:

Prerequisite: at least 2 storage pools are available.

1. Launch virt-manager.
2. Open the host details window (select the connection, then click "Edit -> Host Details") and go to the "Storage" tab.
3. Click the first storage pool, uncheck the check box beside "Autostart", and click "Apply" if the status changed. Click the second storage pool, make sure the check box is selected, and click "Apply".
4. Reboot the host.
5. Launch virt-manager and open the host details window after the reboot has completed.
  
Actual results:
[root@dhcp-65-37 yum.repos.d]# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              active     no        
new_poll             active     yes 
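For reference, the tabular output above can be checked mechanically. A minimal shell sketch (the `virsh pool-list --all` output is pasted in as a literal, so no libvirt host is needed) that extracts each pool's name, state, and autostart flag:

```shell
# The pool-list output from the bug report, pasted as a literal string.
pool_list='Name                 State      Autostart
-----------------------------------------
default              active     no
new_poll             active     yes'

# Print "name state autostart" for each pool, skipping the two header lines.
echo "$pool_list" | awk 'NR > 2 && NF >= 3 { print $1, $2, $3 }'
```

For the run reported here this prints `default active no` and `new_poll active yes`, which shows the mismatch: `default` has autostart off yet is still active after the reboot.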

Expected results:
At step 5, the first storage pool should not be running and the second should be running.

Additional info:
My system is a clean installation, except for the packages we need.

Comment 2 RHEL Program Management 2010-05-07 12:58:58 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 3 Cole Robinson 2010-05-11 18:21:03 UTC
In your description, you say you reboot your guest. In the steps to reproduce, you say you reboot your host machine. Can you please clarify?

Following your steps to reproduce, if I turn off autostart for pool 'default' and do a 'service libvirtd restart', pool-list --all shows that the pool is not running, so I cannot reproduce this issue. Can you update to latest libvirt and retry? If you can reproduce with virt-manager, can you reproduce using the virsh pool-autostart --disable command?
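The virsh-level check described above can be sketched as follows. This assumes a RHEL 6 host with libvirtd running and a pool named 'default'; it is an illustration of the commands named in the comment, not a verified transcript:

```shell
# Assumes a libvirt host with a storage pool named 'default' (illustrative only).
virsh pool-autostart default --disable   # turn autostart off for the pool
service libvirtd restart                 # RHEL 6 SysV-style daemon restart
virsh pool-list --all                    # 'default' should now show Autostart: no
```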

Comment 4 Jianjiao Sun 2010-06-02 01:30:56 UTC
(In reply to comment #3)
> In your description, you say you reboot your guest. In the steps to reproduce,
> you say you reboot your host machine. Can you please clarify?
> 
> Following your steps to reproduce, if I turn off autostart for pool 'default'
> and do a 'service libvirtd restart', pool-list --all shows that the pool is not
> running, so I cannot reproduce this issue. Can you update to latest libvirt and
> retry? If you can reproduce with virt-manager, can you reproduce using the
> virsh pool-autostart --disable command?    

Thanks for your kind reply!
Sorry I replied so late; after I reported the bug I went back to school for a few days, and after coming back, other work kept me busy.
I tested disabling the pool autostart through both the 'pool-autostart' virsh command and virt-manager; both work well on 'libvirt-0.8.1-7.el6.x86_64'. Maybe I made a mistake and forgot to reboot my host!
Thanks again for your kind reply!

Comment 5 Cole Robinson 2010-06-02 13:45:29 UTC
Thanks, closing as WORKSFORME

