Bug 1250786 - Linux KVM VM will sometimes hang when executing dd command
Summary: Linux KVM VM will sometimes hang when executing dd command
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: SATHEESARAN
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2015-08-06 02:52 UTC by fengjian
Modified: 2023-09-18 00:11 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:54:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description fengjian 2015-08-06 02:52:39 UTC
Description of problem:

In a Linux KVM guest VM, running:
#dd if=/dev/zero of=./abc.img bs=200M count=100
sometimes hangs the VM.

Version-Release number of selected component (if applicable):
glusterfs 3.7.3

How reproducible:
Intermittent. It also reproduces during installation of a CentOS 6.5 guest.

Steps to Reproduce:
1. Install a CentOS 6.5 guest backed by the Gluster volume (the installation process itself may hang).
2. Inside the guest, run repeatedly:
#dd if=/dev/zero of=./abc.img bs=200M count=100
3. While the command is running, click on the desktop menu several times.

Actual results:
The guest VM sometimes hangs.

Expected results:
dd completes and the guest remains responsive.

Additional info:
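The reproduction steps above can be sketched as a small loop to run inside the guest. This is a hypothetical helper, not part of the report; the defaults here are scaled down so a dry run is cheap, and setting BS=200M COUNT=100 matches the report's command.

```shell
#!/bin/sh
# Sketch of the reproduction loop (run inside the guest VM).
# The report used bs=200M count=100; small defaults keep a dry run cheap.
BS=${BS:-1M}
COUNT=${COUNT:-4}
ITER=${ITER:-3}

i=1
while [ "$i" -le "$ITER" ]; do
    echo "run $i/$ITER: dd if=/dev/zero of=./abc.img bs=$BS count=$COUNT"
    # Each iteration rewrites the same image file, as in the report.
    dd if=/dev/zero of=./abc.img bs="$BS" count="$COUNT" 2> /dev/null
    i=$((i + 1))
done
runs_done=$((i - 1))
rm -f ./abc.img
echo "completed $runs_done runs"
```

If the hang reproduces, the loop will stall mid-run rather than printing "completed".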

Comment 1 SATHEESARAN 2015-08-06 03:05:14 UTC
Hi fengjian,

I am just trying to understand your usecase.

Are you trying to use glusterfs as the backend for your VM, and accessing it using libgfapi ?

Could you explain your setup/configuration ?

Comment 2 liuyanli 2015-08-06 04:32:16 UTC
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
	<name>one-135</name>
	<vcpu>4</vcpu>
	<cputune>
		<shares>4096</shares>
	</cputune>
	<memory>4194304</memory>
	<os>
		<type arch='x86_64'>hvm</type>
		<boot dev='hd'/>
		<boot dev='cdrom'/>
	</os>
	<devices>
		<emulator>/usr/libexec/qemu-kvm</emulator>
		<disk type='network' device='disk'>
			<source protocol='gluster' name='sr32on104on32/54f4f8b18deae27515391a987a03da89'>
				<host name='192.168.10.31' port='24007' transport='tcp'/>
			</source>
			<target dev='vda'/>
			<driver name='qemu' type='raw' cache='none'/>
		</disk>
		<disk type='network' device='cdrom'>
			<source protocol='gluster' name='sr32on104on32/ba78f17f6e233e21225b1ac453905f7b'>
				<host name='192.168.10.31' port='24007' transport='tcp'/>
			</source>
			<target dev='hda'/>
			<readonly/>
			<driver name='qemu' type='raw' cache='none'/>
		</disk>
		<disk type='network' device='cdrom'>
			<source protocol='gluster' name='sr32on104on32/822dd0beb7924a6756b01e76277f563d'>
				<host name='192.168.10.31' port='24007' transport='tcp'/>
			</source>
			<target dev='hdb'/>
			<readonly/>
			<driver name='qemu' type='raw' cache='none'/>
		</disk>
		<disk type='file' device='cdrom'>
			<source file='/var/lib/one//datastores/101/135/disk.3'/>
			<target dev='hdc'/>
			<readonly/>
			<driver name='qemu' type='raw'/>
		</disk>
		<interface type='bridge'>
			<source bridge='br0'/>
			<mac address='02:00:c0:a8:0a:9b'/>
			<model type='virtio'/>
			<filterref filter='clean-traffic'>
				<parameter name='IP' value='192.168.10.155'/>
			</filterref>
		</interface>
		<graphics type='vnc' listen='0.0.0.0' port='6035'/>
		<input type='mouse' bus='usb'/>
	</devices>
	<features>
		<pae/>
		<acpi/>
		<apic/>
	</features>
	
	<devices>
		<serial type="pty">
			<source path="/dev/pts/5"/>
			<target port="0"/>
		</serial>
		<console type="pty" tty="/dev/pts/5">
			<source path="/dev/pts/5"/>
			<target port="0"/>
		</console>
	</devices>
</domain>
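For readers unfamiliar with libgfapi-backed disks: a `<disk type='network'>` source like the ones in this XML corresponds to a QEMU URL of the form gluster://host[:port]/volume/image. A small standard-library sketch (the embedded XML below is a trimmed copy of the first disk above, for illustration) that derives those URLs:

```python
# Sketch: derive the QEMU gluster:// URLs from the <disk type='network'>
# entries of a libvirt domain XML (URL form: gluster://host[:port]/volume/image).
import xml.etree.ElementTree as ET

# Trimmed copy of the first gluster-backed disk from the domain XML above.
DOMAIN_XML = """\
<domain type='kvm'>
  <devices>
    <disk type='network' device='disk'>
      <source protocol='gluster' name='sr32on104on32/54f4f8b18deae27515391a987a03da89'>
        <host name='192.168.10.31' port='24007' transport='tcp'/>
      </source>
      <target dev='vda'/>
    </disk>
  </devices>
</domain>
"""

def gluster_urls(domain_xml: str) -> list[str]:
    """Return one gluster:// URL per gluster-backed disk in the XML."""
    root = ET.fromstring(domain_xml)
    urls = []
    for src in root.iter('source'):
        if src.get('protocol') != 'gluster':
            continue
        host = src.find('host')
        urls.append('gluster://{}:{}/{}'.format(
            host.get('name'), host.get('port', '24007'), src.get('name')))
    return urls

print(gluster_urls(DOMAIN_XML))
```

For the disk above this prints gluster://192.168.10.31:24007/sr32on104on32/54f4f8b18deae27515391a987a03da89, i.e. QEMU talks to glusterd on port 24007 and opens the image over libgfapi without any FUSE mount.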

Comment 3 liuyanli 2015-08-07 05:29:59 UTC
(In reply to SATHEESARAN from comment #1)
> Hi fengjian,
> 
> I am just trying to understand your usecase.
> 
> Are you trying to use glusterfs as the backend for your VM, and accessing it
> using libgfapi ?
> 
> Could you explain your setup/configuration ?

Hi SATHEESARAN,

Yes, we use GlusterFS as the backend for our VMs and access it through libgfapi.

Our libvirt domain XML is below:

(Same libvirt domain XML as posted in comment 2.)

Comment 4 Niels de Vos 2015-08-18 12:48:10 UTC
Could you also explain a little more about your Gluster environment? The type of the volumes (the output of 'gluster volume info') is the minimum detail we need.

Do you have any messages in the Gluster, libvirt or QEMU logs?

It would also be useful to know where the VM is running: on one of the Gluster storage servers, or on a different system?
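The details requested above could be gathered on the host with a small collector script. This is a sketch, not from the report: the log paths are common glusterfs/libvirt defaults and the domain name one-135 is taken from the XML in comment 2; adjust both for your installation.

```shell
#!/bin/sh
# Sketch: collect the diagnostics requested in comment 4 into one file.
# Assumes default glusterfs/libvirt log locations; VM_NAME from the domain XML.
OUT=${OUT:-gluster-bug-info.txt}
VM_NAME=${VM_NAME:-one-135}
: > "$OUT"

# Run a command and append its output; skip gracefully if the tool is absent.
collect() {
    echo "== $* ==" >> "$OUT"
    if command -v "$1" > /dev/null 2>&1; then
        "$@" >> "$OUT" 2>&1
    else
        echo "(command '$1' not found, skipped)" >> "$OUT"
    fi
}

collect gluster volume info
collect tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
collect tail -n 100 "/var/log/libvirt/qemu/$VM_NAME.log"
echo "wrote $OUT"
```

Attaching the resulting file to the bug would cover the volume layout and the server-side log tails in one step.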

Comment 5 Kaushal 2017-03-08 10:54:10 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

Comment 6 liuyanli 2023-04-14 06:27:39 UTC
This bug is closed.

Comment 7 Red Hat Bugzilla 2023-09-18 00:11:43 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

