Red Hat Bugzilla – Bug 860999
[RHEVM] Unable to create VM from RHEVM
Last modified: 2015-08-10 15:30:09 EDT
Created attachment 617978
Description of problem:
Unable to create a VM from RHEVM. Detailed steps are given below.
Version-Release number of selected component (if applicable):
[root@rhs-client6 ~]# gluster --version
glusterfs 3.3.0rhsvirt1 built on Sep 25 2012 14:53:06
[root@rhs-client6 ~]# rpm -qa | grep gluster
Steps to Reproduce:
1. A 2x2 distributed-replicate volume formed from 4 servers, one brick from each server. Server and brick details:
> rhs-client6.lab.eng.blr.redhat.com:/disk1 (/dev/mapper/vg1-lvol0 mounted on /disk1)
> rhs-client7.lab.eng.blr.redhat.com:/disk1 (/dev/mapper/vg1-lvol0 mounted on /disk1)
> rhs-client8.lab.eng.blr.redhat.com:/disk1 (/dev/mapper/vg1-lvol0 mounted on /disk1)
> rhs-client9.lab.eng.blr.redhat.com:/disk1 (/dev/mapper/vg1-lvol0 mounted on /disk1)
Size of each /disk1 is 500 GB
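For reference, a 2x2 distributed-replicate volume of this shape would be created roughly as sketched below. This is an illustration only: the volume name "dist-rep-rhevh" is taken from the brick logs later in this report, and the brick ordering (which determines the replica pairs) is an assumption based on which pairs of servers could be powered off together without taking the volume down.

```shell
# Sketch only: stage the volume-create commands in a script for review.
# Volume name comes from the brick logs; the brick order (client6+client7
# and client8+client9 as replica pairs) is an assumption.
cat > /tmp/create-dist-rep-rhevh.sh <<'EOF'
#!/bin/sh
gluster volume create dist-rep-rhevh replica 2 \
    rhs-client6.lab.eng.blr.redhat.com:/disk1 \
    rhs-client7.lab.eng.blr.redhat.com:/disk1 \
    rhs-client8.lab.eng.blr.redhat.com:/disk1 \
    rhs-client9.lab.eng.blr.redhat.com:/disk1
gluster volume start dist-rep-rhevh
EOF
chmod +x /tmp/create-dist-rep-rhevh.sh
echo "staged: /tmp/create-dist-rep-rhevh.sh"
```

With `replica 2`, consecutive bricks on the command line form a replica pair, which is why the ordering matters here.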
Note: /dev/mapper/vg1-lvol0 was mounted on /disk1, but there was no entry for it in /etc/fstab.
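The missing piece, then, was a persistent mount entry. A minimal sketch of the /etc/fstab line follows, staged in a scratch file rather than written to /etc/fstab directly; the filesystem type (xfs) and mount options (defaults) are assumptions.

```shell
# Sketch: stage the fstab entry for review before copying it into /etc/fstab.
# Filesystem type (xfs) and options (defaults) are assumptions.
FSTAB_STAGE=/tmp/fstab.stage
echo '/dev/mapper/vg1-lvol0  /disk1  xfs  defaults  0 0' > "$FSTAB_STAGE"
cat "$FSTAB_STAGE"
```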
2. Created 5 VMs from RHEVM.
3. Powered off client6 and client8.
4. Stopped and removed all 5 existing VMs.
5. Powered on client6 and client8.
6. Since there was no entry in /etc/fstab, all the files on client6 and client8 were self-healed to /disk1 on the local drive (not on /dev/mapper/vg1-lvol0).
7. Self-heal was successful.
8. Powered off client7 and client9.
9. Created new VMs from RHEVM.
10. Powered on client7 and client9.
11. Since there was no entry in /etc/fstab, all the files on client7 and client9 were being self-healed to /disk1 on the local drive (not on /dev/mapper/vg1-lvol0).
12. Since /dev/mapper/vg1-lvol0 was not mounted (no entry in fstab), we mounted it manually and added the entry to /etc/fstab.
13. Tried to create new VMs from RHEVM, which fails. The behaviour is as follows:
> We are able to select a template for the VM creation.
> Clicked OK to proceed.
> The VM entry appears with image-locked status.
> After 10-15 secs, the VM is no longer listed in the RHEVM tab.
Also, the following error messages are logged in the brick logs:
[2012-09-27 05:59:52.415338] E [posix-aio.c:248:posix_aio_writev_complete] 0-dist-rep-rhevh-posix: writev(async) failed fd=33,offset=0 (-22/Invalid argument)
[2012-09-27 05:59:52.415386] I [server3_1-fops.c:1414:server_writev_cbk] 0-dist-rep-rhevh-server: 52063: WRITEV 0 (5dd06050-9f23-47fd-9d4f-9776e89502de) ==> -1 (Invalid argument)
[2012-09-27 05:59:52.425487] E [posix-aio.c:248:posix_aio_writev_complete] 0-dist-rep-rhevh-posix: writev(async) failed fd=33,offset=0 (-22/Invalid argument)
[2012-09-27 05:59:52.425523] I [server3_1-fops.c:1414:server_writev_cbk] 0-dist-rep-rhevh-server: 52094: WRITEV 0 (5dd06050-9f23-47fd-9d4f-9776e89502de) ==> -1 (Invalid argument)
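The frequency of these failures can be pulled out of a brick log with grep. A sketch against a sample file containing the error lines quoted above; on the brick servers the real logs typically live under /var/log/glusterfs.

```shell
# Sketch: count async-writev EINVAL failures in a (sample) brick log.
LOG=/tmp/brick-sample.log
cat > "$LOG" <<'EOF'
[2012-09-27 05:59:52.415338] E [posix-aio.c:248:posix_aio_writev_complete] 0-dist-rep-rhevh-posix: writev(async) failed fd=33,offset=0 (-22/Invalid argument)
[2012-09-27 05:59:52.425487] E [posix-aio.c:248:posix_aio_writev_complete] 0-dist-rep-rhevh-posix: writev(async) failed fd=33,offset=0 (-22/Invalid argument)
EOF
grep -c 'writev(async) failed' "$LOG"    # → 2
```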
From RHEVM alerts:
2012-Sep-27, 13:14:47 Failed to create VM test_vm1 (User: admin@internal).
Actual results:
Failed to create the VM.
Expected results:
VM creation should be successful.
The brick log is attached.
Noticed this bug now as it was assigned to Pranith. This issue is the same as bug 859406.
Patch submitted at https://code.engineering.redhat.com/gerrit/#/c/61/; the fix is included in the glusterfs-3.3.0rhsvirt1-7.el6rhs.x86_64 release.
Verified as per the steps above; VM creation is successful.
Verified with build: "Beta - RHS 2.0 with virt support" (glusterfs-3.3.0rhsvirt1-8.el6rhs.x86_64)