+++ This bug was initially created as a clone of Bug #1559827 +++
Description of problem:
While running a Hosted Engine VM on a replicated volume, the FUSE mount crashed and the HE VM was seen in a paused state.
Version-Release number of selected component (if applicable):
rhv-release-4.2.2-6-001.noarch
glusterfs-3.12.2-5.el7rhgs.x86_64
How reproducible:
Hit once
Steps to Reproduce:
1. Create a replicated volume (1x3) on VDO-backed bricks
2. Run the Hosted Engine VM with its image on the gluster volume
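The volume setup in step 1 can be sketched with the gluster CLI. The host IPs and brick paths below are taken from the volume info later in this report; the bricks are assumed to already be mounted on VDO-backed filesystems:

```shell
# Create the 1x3 replicated volume that backs the HE image.
# Brick paths match the volume info in this report; the bricks
# are assumed to sit on VDO-backed mounts.
gluster volume create engine replica 3 \
    10.70.36.241:/gluster_bricks/engine/engine \
    10.70.36.242:/gluster_bricks/engine/engine \
    10.70.36.243:/gluster_bricks/engine/engine

gluster volume start engine
```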
Actual results:
Saw FUSE mount crash
Expected results:
FUSE mount should not crash
Additional info:
1. Cluster info
----------------
This is a 3 node cluster
2. Volume info
---------------
Volume Name: engine
Type: Replicate
Volume ID: 5c2e098f-8fe6-44ef-a786-23909ec63bfe
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.241:/gluster_bricks/engine/engine
Brick2: 10.70.36.242:/gluster_bricks/engine/engine
Brick3: 10.70.36.243:/gluster_bricks/engine/engine
Options Reconfigured:
auth.ssl-allow: rhsqa-grafton7.lab.eng.blr.redhat.com,rhsqa-grafton8.lab.eng.blr.redhat.com,rhsqa-grafton9.lab.eng.blr.redhat.com
server.ssl: on
client.ssl: on
features.shard-block-size: 64MB
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@rhsqa-grafton7-nic2 schema]# gluster volume status engine
Status of volume: engine
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.36.241:/gluster_bricks/engine/engine       49152     0          Y       34550
Brick 10.70.36.242:/gluster_bricks/engine/engine       49152     0          Y       24637
Brick 10.70.36.243:/gluster_bricks/engine/engine       49152     0          Y       44676
Self-heal Daemon on localhost                          N/A       N/A        Y       37166
Self-heal Daemon on 10.70.36.243                       N/A       N/A        Y       45291
Self-heal Daemon on 10.70.36.242                       N/A       N/A        Y       26352
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
3. Other information
---------------------
1. Gluster encryption is enabled on management and data path
2. Sharding is enabled on this volume with shard-block-size is 64MB
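The encryption and sharding configuration described above is applied with standard `gluster volume set` commands; a sketch matching the reconfigured options shown in the volume info (the option names are real gluster options, reproduced here as an illustration):

```shell
# Data-path (client <-> brick) TLS, restricted to the three hosts'
# certificate common names, as listed in auth.ssl-allow above.
gluster volume set engine client.ssl on
gluster volume set engine server.ssl on
gluster volume set engine auth.ssl-allow \
    'rhsqa-grafton7.lab.eng.blr.redhat.com,rhsqa-grafton8.lab.eng.blr.redhat.com,rhsqa-grafton9.lab.eng.blr.redhat.com'

# Management-path encryption is enabled by the secure-access
# touch file on each node (glusterd must be restarted afterwards).
touch /var/lib/glusterd/secure-access

# Sharding with the 64MB shard size noted above.
gluster volume set engine features.shard on
gluster volume set engine features.shard-block-size 64MB
```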
Verified the bug with the below component versions:
glusterfs-fuse-3.12.2-6.el7rhgs.x86_64
glusterfs-3.12.2-6.el7rhgs.x86_64
glusterfs-server-3.12.2-6.el7rhgs.x86_64
glusterfs-libs-3.12.2-6.el7rhgs.x86_64
Steps:
1. Installed RHV-H on the hosts
2. Performed the gluster deployment followed by the HE deployment
3. Created VMs and performed a few IO operations
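One way to confirm step 3 left the FUSE mount healthy is to check the client logs and running processes; a sketch, assuming the default glusterfs client log location (the exact log file name depends on the mount point):

```shell
# A glusterfs crash logs a backtrace containing "signal received"
# in the FUSE client log; no matches means no recorded crash.
grep -l "signal received" /var/log/glusterfs/*.log

# Confirm the FUSE client processes for the mounts are still alive.
pgrep -af glusterfs

# The mount itself should still answer; a paused/crashed mount
# would leave this hanging or failing.
ls /rhev/data-center/mnt/glusterSD/ 2>/dev/null
```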
Result:
Could not see any crashes
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2018:2607