Bug 1242913 - Debian Jessie as KVM guest on GlusterFS backend
Summary: Debian Jessie as KVM guest on GlusterFS backend
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Anoop
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-14 11:49 UTC by Roman
Modified: 2015-07-16 12:40 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-16 12:40:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
debian installation fail (86.33 KB, image/png)
2015-07-14 11:49 UTC, Roman

Description Roman 2015-07-14 11:49:26 UTC
Created attachment 1051788 [details]
debian installation fail

Description of problem:
I'm having problems installing Debian 8 (D8) as a KVM guest on a GlusterFS storage backend.
I run 4 different Proxmox (Debian-based with RH kernel) nodes and hit this problem on every one of them.

Versions:

Kernel: 2.6.32-37-pve
qemu-server: 3.4-6
pve-qemu-kvm: 2.2-10
glusterfs-client: 3.6.4-1

No matter what I do, I'm not even able to complete the installation process; it stops at a random step:

1. at mirror selection (it suddenly says the mirror is not accessible, or that the right version of Debian is not located there)
2. at the package installation step (it says it cannot resolve dependencies because some package was not configured, usually python-gtk2 or something like that).

Once I was lucky enough to install the base system, but ended up with an unstable system: Apache didn't run due to missing modules, even though they were in place; they seemed corrupted to me. It also looks like files get corrupted while being installed on the GlusterFS storage backend.

I can install D8 on local storage without problems. 

There are no error logs on the GlusterFS side.
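A way to watch the client-side log during an install attempt (a sketch only; the glusterfs client names its log file after the mount point, so this exact path is an assumption based on the mount shown in comment 3):

root# tail -f /var/log/glusterfs/mnt-pve-HA-1TB-S14A4F-pve.log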

Debian 7 installs and runs fine. Ubuntu 14.04 LTS installs and runs fine. CentOS 6 installs and runs fine.
Using a local disk for installation works fine!

Any ideas? This has to be solved somehow. I'm also in contact with the GlusterFS users list and wrote to the devel list, but was not able to solve it.

I used GlusterFS 3.5.4 before; the problem was the same.


Version-Release number of selected component (if applicable): 3.6.4, 3.5.4


How reproducible:
Easy.


Steps to Reproduce:
1. Install Proxmox.
2. Create a Debian 8 based VM.
3. Start the installation process.

Actual results:
The installation process crashes at a random step; the installation logs only show information about a failed package installation step, or something similar to the attached screenshot.

Expected results:
Complete installation.

Additional info:

Comment 2 Roman 2015-07-14 12:48:26 UTC
Steps to install Proxmox:
1. get the ISO: http://proxmox.com/en/downloads/category/iso-images-pve
2. burn it
3. install (it won't install on fake RAID, use ZFS instead)
4. edit /etc/apt/sources.list.d/pve-enterprise.list and comment out the line
5. add this line to /etc/apt/sources.list:
# PVE packages provided by proxmox.com
deb http://download.proxmox.com/debian wheezy pve-no-subscription
6. add the Gluster repo (follow-up key/update commands are sketched below):
deb http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt wheezy main
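After adding the repos, the usual refresh and client install would look roughly like this (a sketch; the pubkey location is an assumption, check the repo directory on download.gluster.org for the actual key file):

root# wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/pub.key | apt-key add -
root# apt-get update
root# apt-get install glusterfs-client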

after the installation completes, go to your VE using a browser:
https://host:8006
log in with the root account and add the Gluster storage via the web GUI (Datacenter > Storage > Add); a sketch of the resulting storage entry is below
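For reference, the GUI step above should end up as something like this in /etc/pve/storage.cfg (a sketch: the storage ID and volume name are taken from the logs in comment 3, the server address is a placeholder):

glusterfs: HA-1TB-S14A4F-pve
        server 192.168.1.10
        volume HA-1TB-S14A4F-pve
        content images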

To add a VM one needs an ISO; get it from debian.org
and upload it to the VE using the web GUI (choose the local storage from the left menu, then Contents, and press Upload)

Next: press ADD VM on the right and follow the step-by-step instructions. Choose virtio wherever it is possible (drive and network); a rough CLI equivalent is sketched below.
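A CLI equivalent of the wizard, as a sketch with example values (VMID 232 matches the disk shown in comment 3; the VM name, sizes, and ISO file name are placeholders):

root# qm create 232 --name d8-test --memory 2048 \
      --net0 virtio,bridge=vmbr0 \
      --virtio0 HA-1TB-S14A4F-pve:15 \
      --cdrom local:iso/debian-8.1.0-amd64-netinst.iso
root# qm start 232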

Comment 3 Roman 2015-07-15 13:07:38 UTC
Hey,

I've noticed a strange thing. When I create a VM, two things are created with it: a folder named after its number, and its virtual disk.
Then I see these logs on the GlusterFS client side (the Proxmox server):

[2015-07-15 11:27:45.553952] I [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-HA-1TB-S14A4F-pve-dht: chunk size = 0xffffffff / 1032124 = 0x1041
[2015-07-15 11:27:45.554017] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-HA-1TB-S14A4F-pve-dht: assigning range size 0xfffb6ebc to HA-1TB-S14A4F-pve-replicate-0
[2015-07-15 11:27:45.555603] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-HA-1TB-S14A4F-pve-dht: Setting layout of /images/232 with [Subvol_name: HA-1TB-S14A4F-pve-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 ],

This seems normal, as it happens with each newly created VM. We can see that a new dir was created: /images/232.

The disk is:
root# ls -l /mnt/pve/HA-1TB-S14A4F-pve/images/232/vm-232-disk-1.qcow2
-rw------- 1 root root 16111435776 Jul 15 15:58 /mnt/pve/HA-1TB-S14A4F-pve/images/232/vm-232-disk-1.qcow2

which is about 15 GB; a quick integrity check on it is sketched below.
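A way to verify the image file itself (a sketch; run it while the VM is stopped, qemu-img ships with pve-qemu-kvm):

root# qemu-img check /mnt/pve/HA-1TB-S14A4F-pve/images/232/vm-232-disk-1.qcow2
root# qemu-img info /mnt/pve/HA-1TB-S14A4F-pve/images/232/vm-232-disk-1.qcow2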

If I fire up the VM right away and install it (with Debian 8 only), it fails during install or installs with corrupted binaries. If I delete the virtual disk (the 15 GB file), add a new one, and start the installation process again, I'm likely to end up with a working system...

What I'm thinking of... could it be due to self-heal or something like that? It really looks like something happens to the virtual disk file during installation, after which Debian 8 treats it the wrong way. Sometimes I ended up with a read-only file system after installation... A way to check the self-heal state is sketched below.
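To check whether self-heal is actually involved, the heal status can be queried on one of the gluster server nodes (these subcommands exist in the 3.5/3.6 CLI; the volume name is taken from the logs above):

root# gluster volume heal HA-1TB-S14A4F-pve info
root# gluster volume heal HA-1TB-S14A4F-pve info split-brain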

Any ideas?

Comment 4 Roman 2015-07-15 14:25:55 UTC
Maybe I should try without self-heal?

Comment 5 Roman 2015-07-16 12:40:39 UTC
Sorry for your time, guys.

It seems the problem is related to the Proxmox QEMU version, which does not like the Debian 8 virtio drivers.

As soon as I choose a SATA or SCSI controller for a Debian 8 VM, it installs and runs without any problem. At least I've managed to install 3 instances of Debian 8 without issue. The config change is sketched below.
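The workaround amounts to a one-line change in the VM config on the Proxmox host (a sketch; VMID, storage path, and size are examples based on the disk shown in comment 3):

# in /etc/pve/qemu-server/232.conf, change the disk bus from virtio to sata:
# before (fails with Debian 8):
#   virtio0: HA-1TB-S14A4F-pve:232/vm-232-disk-1.qcow2,size=15G
# after (installs and runs fine):
#   sata0: HA-1TB-S14A4F-pve:232/vm-232-disk-1.qcow2,size=15G
# then point the boot disk at the new bus:
root# qm set 232 -bootdisk sata0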

I will try to get help there (on the Proxmox side).

