Bug 983539
| Summary: | Libvirt should stop starting the fs and netfs pool using a nonexistent/unreachable source device | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Ján Tomko <jtomko> |
| Component: | libvirt | Assignee: | Ján Tomko <jtomko> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.4 | CC: | acathrow, bili, cwei, dyuan, jiahu, jtomko, mzhan |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-0.10.2-20.el6 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | 981251 | Environment: | |
| Last Closed: | 2013-11-21 09:05:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 981251 | | |
| Bug Blocks: | | | |

Doc Text:

Cause: The function virStorageBackendFileSystemMount returned success even if the mount command failed.

Consequence: Libvirt showed the pool as running even though it was unusable.

Fix: Return an error if the mount command fails.

Result: Libvirt no longer reports success when starting a pool with an unreachable source device.
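The fix amounts to propagating the mount command's exit status out of virStorageBackendFileSystemMount. Below is a minimal sketch of the corrected control flow, assuming the libvirt 0.10.x internal virCommand helpers and pool structure fields; the body is heavily simplified (the real function also handles netfs source strings, target-directory checks, and other cases), so treat it as an illustration rather than the actual patch.

```c
/* A simplified sketch of the fixed control flow in
 * virStorageBackendFileSystemMount() -- not the verbatim source.
 * It builds e.g. "/bin/mount -t ext3 /dev/sdb1 /machine/sdb1". */
static int
virStorageBackendFileSystemMount(virStoragePoolObjPtr pool)
{
    int ret = -1;
    virCommandPtr cmd;

    cmd = virCommandNewArgList(MOUNT, "-t",
                               virStoragePoolFormatFileSystemTypeToString(
                                   pool->def->source.format),
                               pool->def->source.devices[0].path,
                               pool->def->target.path,
                               NULL);

    /* virCommandRun() reports "unexpected exit status N" and returns
     * -1 when the child fails.  Before the fix this failure was
     * swallowed and the function reported success, so the pool was
     * marked as running even though nothing was mounted. */
    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    ret = 0;

 cleanup:
    virCommandFree(cmd);
    return ret;
}
```

This matches the errors seen later in the verification transcript, where the mount command's exit status 32 is now surfaced to the caller instead of being discarded.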
Description
Ján Tomko
2013-07-11 12:01:18 UTC
The bug can no longer be reproduced on libvirt-0.10.2-20.el6.x86_64.
1. Start these pools.
[root@test ~]# virsh pool-list --all --details
Name          State    Autostart  Persistent  Capacity    Allocation  Available
--------------------------------------------------------------------------------
default       running  yes        yes         147.65 GiB  12.97 GiB   134.68 GiB
fs_pool_sdb1  running  no         yes         485.45 MiB  10.30 MiB   475.16 MiB
nfs_pool      running  no         yes         49.09 GiB   3.38 GiB    45.71 GiB
[root@test ~]#
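The same listing can be obtained through the public libvirt C API. A minimal sketch (the connection URI is an assumption, and the output format is simplified compared to virsh):

```c
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* API equivalent of "virsh pool-list --all": with flags == 0,
     * both active and inactive pools are returned. */
    virStoragePoolPtr *pools = NULL;
    int npools = virConnectListAllStoragePools(conn, &pools, 0);

    for (int i = 0; i < npools; i++) {
        printf("%s: %s\n", virStoragePoolGetName(pools[i]),
               virStoragePoolIsActive(pools[i]) == 1 ? "running"
                                                     : "inactive");
        virStoragePoolFree(pools[i]);
    }
    free(pools);
    virConnectClose(conn);
    return 0;
}
```

With the three pools above active, this would print three "running" lines.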
[root@test ~]# virsh pool-dumpxml fs_pool_sdb1
<pool type='fs'>
<name>fs_pool_sdb1</name>
<uuid>9a28db7c-de6e-752c-d70a-0ef0a215eebe</uuid>
<capacity unit='bytes'>509035520</capacity>
<allocation unit='bytes'>10796032</allocation>
<available unit='bytes'>498239488</available>
<source>
<device path='/dev/sdb1'/>
<format type='ext3'/>
</source>
<target>
<path>/machine/sdb1</path>
<permissions>
<mode>0700</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
[root@test ~]# virsh pool-dumpxml nfs_pool
<pool type='netfs'>
<name>nfs_pool</name>
<uuid>1b4a7db0-e755-55df-e690-cebfe07c8ac2</uuid>
<capacity unit='bytes'>52710866944</capacity>
<allocation unit='bytes'>3634364416</allocation>
<available unit='bytes'>49076502528</available>
<source>
<host name='*.*.*.*'/>
<dir path='/libvirt_nfs'/>
<format type='nfs'/>
</source>
<target>
<path>/nfs_pool</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
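Pools like the two above can also be defined programmatically. A minimal sketch using the public libvirt C API, with the fs pool XML abbreviated and the connection URI assumed:

```c
#include <stdio.h>
#include <libvirt/libvirt.h>

/* Abbreviated copy of the fs pool XML dumped above. */
static const char *pool_xml =
    "<pool type='fs'>"
    "  <name>fs_pool_sdb1</name>"
    "  <source><device path='/dev/sdb1'/><format type='ext3'/></source>"
    "  <target><path>/machine/sdb1</path></target>"
    "</pool>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* API equivalent of "virsh pool-define": creates a persistent
     * but inactive pool, matching the "Persistent yes" column in
     * the listing above. */
    virStoragePoolPtr pool = virStoragePoolDefineXML(conn, pool_xml, 0);
    if (!pool) {
        fprintf(stderr, "pool definition failed\n");
        virConnectClose(conn);
        return 1;
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}
```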
2. Destroy the pools and block their sources from the host machine.
[root@test ~]# virsh pool-list --all --details
Name          State    Autostart  Persistent  Capacity    Allocation  Available
--------------------------------------------------------------------------------
default       running  yes        yes         147.65 GiB  12.97 GiB   134.68 GiB
fs_pool_sdb1  running  no         yes         485.45 MiB  10.30 MiB   475.16 MiB
nfs_pool      running  no         yes         49.09 GiB   3.38 GiB    45.71 GiB
[root@test ~]# virsh pool-destroy fs_pool_sdb1
Pool fs_pool_sdb1 destroyed
[root@test ~]# virsh pool-destroy nfs_pool
Pool nfs_pool destroyed
[root@test ~]# parted /dev/sdb print
Error: Could not stat device /dev/sdb - No such file or directory.
Retry/Cancel? ^C
[root@test ~]# showmount -e *.*.*.*
Export list for 10.66.100.107:
/libvirt_nfs *
[root@test ~]# showmount -e *.*.*.*
clnt_create: RPC: Program not registered
[root@test ~]# virsh pool-start fs_pool_sdb1
error: Failed to start pool fs_pool_sdb1
error: internal error Child process (/bin/mount -t ext3 /dev/sdb1 /machine/sdb1) unexpected exit status 32: mount: special device /dev/sdb1 does not exist
[root@test ~]# virsh pool-start nfs_pool
error: Failed to start pool nfs_pool
error: internal error Child process (/bin/mount -t nfs 10.66.100.107:/libvirt_nfs /nfs_pool) unexpected exit status 32: mount.nfs: Connection timed out
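With the fixed package this failure also propagates through the public API, so clients can no longer end up holding a pool that claims to be running while nothing is mounted. A minimal sketch of how a client observes it (the connection URI is an assumption; the pool name comes from this transcript):

```c
#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolLookupByName(conn,
                                                        "fs_pool_sdb1");
    if (!pool) {
        virConnectClose(conn);
        return 1;
    }

    /* API equivalent of "virsh pool-start": with the fix, a failed
     * mount makes this return -1 instead of silently succeeding. */
    if (virStoragePoolCreate(pool, 0) < 0) {
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "pool start failed: %s\n",
                err && err->message ? err->message : "unknown error");
    } else if (virStoragePoolIsActive(pool) == 1) {
        printf("pool is running\n");
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}
```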
3. Restore the pool sources and restart the pools; they start normally.
[root@test ~]# parted /dev/sdb print
Model: Generic- SD/MMC (scsi)
Disk /dev/sdb: 1967MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End    Size   Type     File system  Flags
 1      1049kB  527MB  526MB  primary  ext3
[root@test ~]# showmount -e *.*.*.*
Export list for *.*.*.*:
/libvirt_nfs *
[root@test ~]# virsh pool-start fs_pool_sdb1
Pool fs_pool_sdb1 started
[root@test ~]# virsh pool-start nfs_pool
Pool nfs_pool started
We got the expected results, so this bug is moved to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1581.html