Bug 965369 - pool type "fs" creation not possible due to 'mount point' error
Status: CLOSED NOTABUG
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Cole Robinson
Reported: 2013-05-20 23:43 EDT by roland
Modified: 2013-08-28 10:52 EDT
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2013-08-28 10:52:17 EDT
Type: Bug


Attachments: None
Description roland 2013-05-20 23:43:21 EDT
Description of problem:
When creating a new storage pool of type "fs" to allow a VM to access a disk partition with an existing installation in it, an error occurs stating that the "mount point .... does not exist".  The documentation says libvirt will create it if it doesn't exist, but even if it is created separately, the same error still occurs.


Version-Release number of selected component (if applicable):
libvirt 0.8.3

How reproducible:

Create a new storage pool in virt-manager.

Steps to Reproduce:

1. In virt-manager, select "edit | host details | storage (tab)".  
2. Add storage pool by clicking the "+" icon.
3. New dialog opens.
   Name: test_fs
   Type: fs: Pre-formatted block device
4. Click Forward.
   Target path: /var/lib/libvirt/images/test_fs
   Format: Auto
   Host Name: (greyed out)
   Source path: /dev/sda6
   Build pool: (greyed out)
5. Click Finish.
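
For reference, the dialog above corresponds to a libvirt "fs" pool definition such as the following, which could be fed to `virsh pool-define`, then built and started with `virsh pool-build` and `virsh pool-start`. This is a sketch based on the values entered in the steps above, not output from the reporter's system:

```xml
<pool type='fs'>
  <name>test_fs</name>
  <source>
    <!-- The pre-formatted block device to mount -->
    <device path='/dev/sda6'/>
    <!-- 'auto' lets libvirt probe the filesystem type -->
    <format type='auto'/>
  </source>
  <target>
    <!-- Mount point; pool-build should create it if missing -->
    <path>/var/lib/libvirt/images/test_fs</path>
  </target>
</pool>
```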

Actual results:

Error is displayed
Error creating pool: Could not start storage pool: cannot open volume '/var/lib/libvirt/images/test_fs/initrd.img': No such file or directory

Expected results:
Storage pool to be defined and usable.

Additional info:

Below the error this traceback info is shown:
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/createpool.py", line 432, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/pymodules/python2.6/virtinst/Storage.py", line 477, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: cannot open volume '/var/lib/libvirt/images/test_fs/initrd.img': No such file or directory

syslog shows the following messages:

May 21 05:38:08 zambas libvirtd: 05:38:08.473: error : storagePoolLookupByName:299 : Storage pool not found: no pool with matching name 'test1'
May 21 05:38:46 zambas libvirtd: 05:38:46.166: error : qemudDomainGetVcpus:6044 : Requested operation is not valid: cannot list vcpu pinning for an inactive domain
May 21 05:38:46 zambas libvirtd: 05:38:46.174: error : qemudDomainGetVcpus:6044 : Requested operation is not valid: cannot list vcpu pinning for an inactive domain
May 21 05:39:06 zambas libvirtd: 05:39:06.268: error : storagePoolLookupByName:299 : Storage pool not found: no pool with matching name 'test1'
May 21 05:39:42 zambas libvirtd: 05:39:42.133: warning : qemudDispatchSignalEvent:396 : Shutting down on signal 15
May 21 05:39:48 zambas libvirtd: 05:39:48.590: warning : networkAddIptablesRules:850 : Could not add rule to fixup DHCP response checksums on network 'routed'.
May 21 05:39:48 zambas libvirtd: 05:39:48.590: warning : networkAddIptablesRules:851 : May need to update iptables package & kernel to support CHECKSUM rule.
May 21 05:39:49 zambas libvirtd: 05:39:49.189: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address
May 21 05:39:49 zambas libvirtd: 05:39:49.341: warning : lxcStartup:1895 : Unable to create cgroup for driver: No such device or address
May 21 05:40:07 zambas libvirtd: 05:40:07.082: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed
May 21 05:40:07 zambas libvirtd: 05:40:07.085: error : qemudDomainGetVcpus:6044 : Requested operation is not valid: cannot list vcpu pinning for an inactive domain
May 21 05:40:07 zambas libvirtd: 05:40:07.088: error : qemudDomainGetVcpus:6044 : Requested operation is not valid: cannot list vcpu pinning for an inactive domain
May 21 05:40:19 zambas libvirtd: 05:40:19.939: error : storagePoolLookupByName:299 : Storage pool not found: no pool with matching name 'test2'
May 21 05:41:04 zambas libvirtd: 05:41:04.292: error : storagePoolLookupByName:299 : Storage pool not found: no pool with matching name 'test1'
May 21 05:41:16 zambas libvirtd: 05:41:16.034: error : virStorageBackendVolOpenCheckMode:984 : cannot open volume '/var/lib/libvirt/images/test1/initrd.img': No such file or directory
Comment 1 Giuseppe Scrivano 2013-08-28 10:18:24 EDT
What version of virt-manager are you using (rpm -q virt-manager)?  The libvirt version specified (0.8.3), and the fact that "Host Details" was renamed to "Connection Details" five years ago, suggest a quite old version.

The libvirtd error seems to be related to another pool, "test1", and not to the pool that you are trying to create.  Do you get the same syslog error if you just browse the existing pool without adding a new volume?

This fix might be related:

https://www.redhat.com/archives/virt-tools-list/2012-January/msg00025.html
Comment 2 roland 2013-08-28 10:51:13 EDT
$ sudo dpkg -s libvirt-bin
Version: 0.9.13-0ubuntu12.3

This was the version installed from Ubuntu 12.10 repositories.

I have since given up on the installation, since the oldish processor doesn't support virtualisation and the Windows guest is excruciatingly slow.

I suppose I can close this bug, since it's not relevant anymore.
