Bug 848336 - unable to create guestOS using KVM on a glusterfs mount. (Works for nfs mount)
Summary: unable to create guestOS using KVM on a glusterfs mount. (Works for nfs mount)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: shishir gowda
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 770341
Blocks:
 
Reported: 2012-08-15 09:44 UTC by Vidya Sakar
Modified: 2013-12-09 01:33 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 770341
Environment:
Last Closed: 2013-09-23 22:36:18 UTC
Embargoed:


Attachments

Description Vidya Sakar 2012-08-15 09:44:59 UTC
+++ This bug was initially created as a clone of Bug #770341 +++

Created attachment 549540 [details]
Strace of the client glusterfs process during the creation of the guestOS.

Description of problem: 
Unable to create a guestOS using KVM on a glusterfs mount. The same works on an NFS mount.

Version-Release number of selected component (if applicable):
glusterfs 3.3.0qa17

How reproducible:


Steps to Reproduce:
1. Create a volume with replica 2 -> vm_replica (example commands after this list)
2. Mount the volume (vm_replica) from the client (e.g., mount type: glusterfs, mount dir: /mnt/vmstore)
3. Create a new guestOS using virt-manager.
4. Select the storage location /mnt/vmstore while creating the guestOS.
5. Complete all the steps for creating the new guest. The final stage reports an error.
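
For reference, a minimal command-line sketch of steps 1 and 2, assuming two servers (server1, server2) exporting bricks at /bricks/vm_replica; the hostnames and brick paths are placeholders, not taken from the original report:

# On a gluster server: create and start a 2-way replicated volume
gluster volume create vm_replica replica 2 server1:/bricks/vm_replica server2:/bricks/vm_replica
gluster volume start vm_replica

# On the client: FUSE-mount the volume at /mnt/vmstore
mkdir -p /mnt/vmstore
mount -t glusterfs server1:/vm_replica /mnt/vmstore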
  
Actual results:

Unable to complete install: 'internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu-kvm: -drive file=/mnt/vmstore/cent.img,if=none,id=drive-ide0-0-0,format=raw,cache=none: could not open disk image /mnt/vmstore/cent.img: Invalid argument
'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 1903, in do_install
    guest.start_install(False, meter=meter)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1223, in start_install
    noboot)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1291, in _create_guest
    dom = self.conn.createLinux(start_xml or final_xml, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2064, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu-kvm: -drive file=/mnt/vmstore/cent.img,if=none,id=drive-ide0-0-0,format=raw,cache=none: could not open disk image /mnt/vmstore/cent.img: Invalid argument


Expected results:

Successful creation of the guestOS.

Additional info:

Comment 2 Amar Tumballi 2012-08-23 16:13:36 UTC
Need to check this on RHS, as I believe the latest master already has fixes for this.

Comment 3 shishir gowda 2012-09-13 07:22:04 UTC
Creation of VMs fails due to ownership issues on the volume.
Changing the ownership to 36:36 on the volume (by changing it manually on all brick exports of the volume) fixes this issue.
Can you please check if the issue gets fixed?
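
For reference, a sketch of the ownership change described above, assuming /bricks/vm_replica is the brick export path on each server (a placeholder path); 36:36 typically corresponds to the vdsm:kvm user/group on RHEV hosts:

# Run on every server that exports a brick of the volume
chown -R 36:36 /bricks/vm_replica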

Comment 4 SATHEESARAN 2013-01-08 14:23:39 UTC
Also, the SELinux status on the client side (where the volume is FUSE-mounted) should be enforcing, with the relevant SELinux booleans set. I was able to create a VM without changing the ownership of the volume to 36:36.
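
For reference, a sketch of the SELinux settings referred to above, assuming the standard virt_use_fusefs boolean is the relevant one on the client:

# On the client where the volume is FUSE-mounted
setenforce 1                      # make sure SELinux is enforcing
setsebool -P virt_use_fusefs on   # allow qemu/libvirt to access FUSE-mounted storage
getsebool virt_use_fusefs         # verify the boolean is on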

Comment 5 SATHEESARAN 2013-01-08 14:24:17 UTC
Verified this on glusterfs 3.4.0qa5.
Moving this to VERIFIED state.

Comment 6 SATHEESARAN 2013-01-09 07:06:21 UTC
Verified this also on RHS 2.0 latest [glusterfs-3.3.0.5rhs-40.el6rhs].

Comment 8 Scott Haines 2013-09-23 22:36:18 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

