Bug 1323827 - Problem with clone machine on glusterfs storage
Summary: Problem with clone machine on glusterfs storage
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-04 20:42 UTC by Viliam Tokarcik
Modified: 2019-06-15 18:00 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-15 18:00:24 UTC



Description Viliam Tokarcik 2016-04-04 20:42:07 UTC
Description of problem:

Cloning a virtual machine fails when the machine has an attached disk image stored on a gluster volume.

Version-Release number of selected component (if applicable):
virt-manager 1.3.2

How reproducible:
Configure a new storage pool of type gluster, named 'gluster', then follow the steps below.

Steps to Reproduce:
1. Right click on machine
2. Select clone
3. Enter new name
4. Click the disk entry in the 'Storage' section
5. Check the new image file location; for example, mine is gluster://localhost/vol1/wm32.qcow2
6. Click OK
7. Click Clone

Actual results:

Uncaught error validating input: [Errno 2] No such file or directory: 'gluster://localhost/vol1'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/clone.py", line 802, in finish
    if not self.validate():
  File "/usr/share/virt-manager/virtManager/clone.py", line 781, in validate
    cd.setup_clone()
  File "/usr/share/virt-manager/virtinst/cloner.py", line 408, in setup_clone
    self._setup_disk_clone_destination(orig_disk, clone_disk)
  File "/usr/share/virt-manager/virtinst/cloner.py", line 361, in _setup_disk_clone_destination
    clone_disk.validate()
  File "/usr/share/virt-manager/virtinst/devicedisk.py", line 848, in validate
    self._storage_backend.validate(self)
  File "/usr/share/virt-manager/virtinst/diskbackend.py", line 292, in validate
    err, msg = self.is_size_conflict()
  File "/usr/share/virt-manager/virtinst/diskbackend.py", line 329, in is_size_conflict
    vfs = os.statvfs(os.path.dirname(self._path))
OSError: [Errno 2] No such file or directory: 'gluster://localhost/vol1'
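The traceback bottoms out in os.statvfs(), which only accepts local filesystem paths; a gluster:// URI is not one, so the same failure reproduces outside virt-manager (URI copied from the report):

```shell
# os.statvfs() expects a local filesystem path; the gluster:// URI from
# this report is not one, so the bare call fails the same way:
python3 -c "import os; os.statvfs('gluster://localhost/vol1')"
# raises [Errno 2] No such file or directory: 'gluster://localhost/vol1'
```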


Expected results:

The disk image is created on the gluster volume and the clone completes successfully.


Additional info:

none

Comment 1 manous 2016-06-30 07:12:21 UTC
I have the same issue!

Comment 2 lejeczek 2016-12-14 10:32:52 UTC
Command-line tools are affected too:

$ virt-clone -o rhel-work1 -n rhel-work2 --file gluster://127.0.0.1/QEMU-VMs/rhel-work2.qcow2
$ virt-clone -o rhel-work1 --auto-clone

It would be nice if these were able to do that as well. I presume this is a "feature enhancement" request.

Comment 3 Meltro 2017-08-17 18:31:09 UTC
Some questions, how are you creating the initial image on the gluster volume? Are you able to create snapshots?

If you are connected via glusterfs-api, you probably will not be able to create new volumes using virsh, which precludes snapshots and cloning, as well as creating new images via virt-manager.

Myself, I'm using qemu-img create via a FUSE mount of the gluster volume, and that file can then be manipulated by qemu/kvm.
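The FUSE workaround above can be sketched as follows. The host "localhost" and volume "vol1" match the reporter's URI; the mount point and image names are assumptions:

```shell
# Mount the gluster volume with FUSE so the image lives at an ordinary
# filesystem path that os.statvfs() can handle.
sudo mkdir -p /mnt/gluster-vol1
sudo mount -t glusterfs localhost:/vol1 /mnt/gluster-vol1

# Create or clone against the FUSE path instead of the gluster:// URI.
qemu-img create -f qcow2 /mnt/gluster-vol1/wm32-clone.qcow2 20G
virt-clone -o rhel-work1 -n rhel-work2 \
    --file /mnt/gluster-vol1/rhel-work2.qcow2
```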

Comment 4 Cole Robinson 2019-06-15 18:00:24 UTC
I no longer have a gluster setup for testing. However, my understanding is that this will work correctly if the source VM's gluster storage is part of a libvirt storage pool: virt-manager detects the source storage pool and uses it to request creation of a new storage volume within the same pool. Cloning local storage to a gluster volume won't work, because libvirt doesn't support cross-pool cloning. And if you are trying to use or create a gluster volume without a libvirt storage pool involved, we have no way to know how to use or create that storage, so it is going to fail. There are probably better error messages we could provide to make that explicit.

If anyone is still hitting issues here, please file a _new_ bug and provide full 'virt-clone --debug' or 'virt-manager --debug' output when reproducing, as well as the 'sudo virsh pool-dumpxml $poolname' output for any involved gluster pools.
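A sketch of putting the gluster volume under a libvirt storage pool, as the comment above suggests; the pool name, host, and volume name are assumptions matching the reporter's URI:

```shell
# Define a libvirt gluster pool over the volume, so virt-manager can
# create clone volumes inside it rather than touching raw gluster:// URIs.
cat > gluster-pool.xml <<'EOF'
<pool type='gluster'>
  <name>gluster-vol1</name>
  <source>
    <host name='localhost'/>
    <dir path='/'/>
    <name>vol1</name>
  </source>
</pool>
EOF
sudo virsh pool-define gluster-pool.xml
sudo virsh pool-start gluster-vol1
sudo virsh pool-dumpxml gluster-vol1   # output to attach to a new bug
```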

