Bug 1575869 - QEMU-KVM crash seen while powering off the VMs
Summary: QEMU-KVM crash seen while powering off the VMs
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1575872
Blocks:
 
Reported: 2018-05-08 06:42 UTC by bipin
Modified: 2018-06-12 17:27 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1575872
Environment:
Last Closed: 2018-06-12 17:27:14 UTC
Embargoed:


Attachments

Description bipin 2018-05-08 06:42:14 UTC
Description of problem:
While powering off the VMs, qemu-kvm was seen to crash. Random I/O was also running on the VMs at the time.


Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64
libvirt-daemon-kvm-3.9.0-14.el7_5.2.x86_64
qemu-kvm-common-rhev-2.10.0-21.el7_5.1.x86_64
redhat-release-virtualization-host-4.2-3.0.el7.x86_64


How reproducible:
1/1


Steps to Reproduce:
1. Deploy the HostedEngine on top of VDO volumes
2. Create multiple VMs and run I/O on them
3. Stop the VMs while the I/O is still running (see the command sketch below)
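
The sketch below is one way steps 2 and 3 might be driven from a hypervisor shell; it is an illustration under assumptions, not the exact procedure used in this report. The guest address, the domain name vm01, and the fio job are placeholders, and in an RHHI setup the VMs would normally be created and stopped from the RHV Manager UI rather than with virsh directly.

# Drive random I/O inside a running guest (fio parameters are only an example workload)
ssh root@<guest-ip> 'fio --name=randrw --rw=randrw --bs=4k --size=1G \
    --filename=/var/tmp/fio.test --direct=1 --time_based --runtime=300'

# Power the VM off while the I/O is still running
virsh shutdown vm01    # graceful ACPI shutdown; "virsh destroy vm01" for a hard power-off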

Actual results:
The qemu-kvm process crashed while the VMs were being powered off (one way to confirm the crash on the host is sketched below).
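
As a rough pointer (not part of the original report), the crash could be confirmed on the affected hypervisor along these lines, assuming abrt is collecting core dumps (the RHEL 7 default) and that vm01 is the affected domain:

# List crashes picked up by abrt and look for a qemu-kvm entry
abrt-cli list

# Check the per-domain libvirt/qemu log for the termination reason
grep -iE 'crash|signal|abort' /var/log/libvirt/qemu/vm01.log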

Expected results:
The VMs should power off cleanly, without any qemu-kvm crash.

Additional info:

1. 3-node cluster

2. [root@rhsqa-grafton7-nic2 yum.repos.d]# gluster volume info
 
Volume Name: VDO_Test
Type: Replicate
Volume ID: b8f6808f-3d6f-4d6d-af8f-f9857ab01a5e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.45.29:/gluster_bricks/VDO_Test/VDO_Test
Brick2: 10.70.45.30:/gluster_bricks/VDO_Test/VDO_Test
Brick3: 10.70.45.31:/gluster_bricks/VDO_Test/VDO_Test
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
 
Volume Name: data
Type: Replicate
Volume ID: c5fce34a-62fd-4ec7-ad79-a820c2d8ebde
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.45.29:/gluster_bricks/data/data
Brick2: 10.70.45.30:/gluster_bricks/data/data
Brick3: 10.70.45.31:/gluster_bricks/data/data
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
 
Volume Name: engine
Type: Replicate
Volume ID: 528b01a9-d780-440a-a75c-d5928223d6d6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.45.29:/gluster_bricks/engine/engine
Brick2: 10.70.45.30:/gluster_bricks/engine/engine
Brick3: 10.70.45.31:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
 
Volume Name: vmstore
Type: Replicate
Volume ID: 1bee697d-dcf8-4358-94d4-61fac6aabddc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.45.29:/gluster_bricks/vmstore/vmstore
Brick2: 10.70.45.30:/gluster_bricks/vmstore/vmstore
Brick3: 10.70.45.31:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
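
For context, the option set shown above is essentially the RHHI virtualization profile with strict O_DIRECT handling, which is relevant on VDO-backed bricks. The following is a hedged sketch of how the two O_DIRECT-related options on the VDO_Test volume would be applied and verified by hand; whether they were set manually or via a volume option group is not stated in this report.

# Apply the O_DIRECT-related options to the VDO-backed volume
gluster volume set VDO_Test performance.strict-o-direct on
gluster volume set VDO_Test network.remote-dio off

# Confirm the effective values
gluster volume get VDO_Test performance.strict-o-direct
gluster volume get VDO_Test network.remote-dio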

