Bug 1488859 - [gfapi] Application VMs goes in to non-responding state
Summary: [gfapi] Application VMs goes in to non-responding state
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1488863 1515149
Blocks:
 
Reported: 2017-09-06 11:06 UTC by SATHEESARAN
Modified: 2019-04-17 12:16 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1488863 (view as bug list)
Environment:
Last Closed: 2019-04-17 12:16:47 UTC
Embargoed:



Description SATHEESARAN 2017-09-06 11:06:56 UTC
Description of problem:
-----------------------
On an RHHI setup with RHV 4.1.6 and an RHGS 3.3.0 interim build, the application VMs go into a non-responding state after some idle time.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 7.4 Server
RHGS 3.3.0 RC (glusterfs-3.8.4-43.el7rhgs)
RHV 4.1.6 nightly

How reproducible:
-----------------
2/2

Steps to Reproduce:
-------------------
1. Create a 3-node cluster with Gluster and virt capabilities enabled.
2. Enable libgfapi access by editing the cluster.
3. Create a RHEL 7.4 VM, seal the VM, and create a template from it.
4. Using the template, create 30 VMs across the cluster.
5. Leave the setup idle for some time.
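For step 2, libgfapi access can also be enabled from the engine host rather than through the UI. A minimal sketch, assuming the `LibgfApiSupported` engine-config option documented for RHV 4.1 (verify the option name and version against your engine release):

```shell
# On the RHV engine host: enable libgfapi-based disk access
# for 4.1-level clusters (assumes the LibgfApiSupported option
# is available in this engine build).
engine-config -s LibgfApiSupported=true --cver=4.1

# Restart the engine service so the setting takes effect.
systemctl restart ovirt-engine

# Confirm the current value.
engine-config -g LibgfApiSupported
```

VMs started (or restarted) after this change should attach their Gluster disks via libgfapi instead of the FUSE mount.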

Actual results:
---------------
After some time, the VMs go into a non-responding state.

Expected results:
-----------------
The VMs should remain up and healthy.

Comment 1 Sahina Bose 2017-10-11 11:16:49 UTC
Moving to 1.2, as this issue happens only with gfapi

Comment 2 Sahina Bose 2018-10-15 05:58:54 UTC
Setting to medium, as the gfapi change is in limbo

