Bug 1648889 - [Test Only] Need to cover the full test matrix for backup and restore.
Summary: [Test Only] Need to cover the full test matrix for backup and restore.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 4.2.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.2.8
Assignee: Simone Tiraboschi
QA Contact: meital avital
URL:
Whiteboard:
Depends On: 1657767
Blocks:
 
Reported: 2018-11-12 11:45 UTC by Nikolai Sednev
Modified: 2019-04-28 09:20 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-13 07:35:04 UTC
oVirt Team: Integration
Target Upstream Version:
Embargoed:



Description Nikolai Sednev 2018-11-12 11:45:47 UTC
Description of problem:
Need to cover the full test matrix for backup and restore from https://bugzilla.redhat.com/show_bug.cgi?id=1638096#c20.
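
Every scenario below exercises the same basic flow: take an engine backup on the existing environment, then redeploy the hosted engine restoring the engine from that file. A minimal sketch of the flow (file names and hostnames are placeholders; the interactive answers during deploy depend on the target storage and host):

# On a HE host, optionally set global maintenance before touching the engine VM.
hosted-engine --set-maintenance --mode=global

# On the current engine (hosted or bare metal): take a full backup.
engine-backup --mode=backup --scope=all --file=/root/engine-backup.tar.gz --log=/root/engine-backup.log

# Copy the backup to the host chosen for the redeployment.
scp /root/engine-backup.tar.gz root@target-host.example.com:/root/

# On the target host: redeploy the hosted engine, restoring from the backup file.
hosted-engine --deploy --restore-from-file=/root/engine-backup.tar.gz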

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-2.2.32-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.18-1.el7ev.noarch

Steps to Reproduce:

1. storage (nfs/gluster hc/gluster on external nodes/iscsi/fc):
 - redeploy over the same storage type but a different mount point (or LUN UUID), e.g. from NFS to NFS on a different server
 - move from one technology to a different one, e.g. from iSCSI to NFS
 - redeploy in place over the same mount point after manually cleaning it (e.g. the same LUN, the same NFS mount point); see the preparation sketch after this list

2. hosts:
 - redeploy on a new host that wasn't part of the environment
 - reuse an existing HE host
 - promote a non-HE host to the HE hosts set
 - redeploy on a host that was already part of the env but whose OS has been redeployed

3. backup source:
 - non-HE env to HE
 - HE deployed with the node 0 flow
 - HE deployed with the vintage flow

4. other VMs:
 - redeploy with running VMs on other, non-HE hosts
 - redeploy with running VMs on HE hosts

5. SPM:
 - redeploy on the SPM host
 - redeploy on a non-SPM host
 - redeploy on an env where the SPM host is not what we have in the backup

6. SPM id (see the preparation sketch after this list):
 - redeploy on a host with spm_id=1
 - redeploy on a host with spm_id!=1

7. master storage domain:
 - redeploy on an environment where the master storage domain as reported in the backup is still alive
 - redeploy on an environment where the master storage domain as reported in the backup is not available and the engine has to elect a new master SD

8. power management (see the preparation sketch after this list):
 - redeploy on an env where power management is configured and all the hosts could be reached
 - redeploy on an env where power management is configured and we have unreachable hosts
 - redeploy on an env where power management is not configured
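
For the items above that require preparing or inspecting the environment first (in-place redeploy over the same mount point, spm_id selection, power management state), a rough preparation sketch; hostnames, paths and credentials are placeholders, and the database table and software collection names are assumptions based on the 4.2 layout:

# Item 1, in-place redeploy: manually clean the old hosted-engine NFS export
# before re-running the deploy (for iSCSI/FC, wipe the old LUN instead).
mkdir -p /mnt/he_cleanup
mount -t nfs storage.example.com:/exports/hosted_engine /mnt/he_cleanup
rm -rf /mnt/he_cleanup/*
umount /mnt/he_cleanup

# Item 6, spm_id selection: read the host <-> spm_id mapping from the engine
# database (assumed table: vds_spm_id_map; RHV 4.2 ships PostgreSQL in the
# rh-postgresql95 software collection).
su - postgres -c "scl enable rh-postgresql95 -- psql engine -c 'SELECT vds_id, vds_spm_id FROM vds_spm_id_map;'"

# Item 8, power management: check whether PM is configured on the hosts via the
# REST API (engine FQDN and admin password are placeholders).
curl -s -k -u admin@internal:password https://engine.example.com/ovirt-engine/api/hosts | grep -A1 '<power_management>'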

As a minimum of what we absolutely have to cover for 4.2.7, I'd say:
- one attempt per SD technology: nfs/gluster hc/gluster on external nodes/iscsi/fc
- one attempt redeploying on the same SD, one changing SD, and one going from bare metal to HE
- one attempt restoring a backup of a system initially deployed with the node 0 flow, and one restoring a backup of a system initially deployed with the vintage flow

So I think we need at least 6 attempts to have a reasonable testing coverage.

(Originally by Simone Tiraboschi)

Expected results:
All iterations must be covered and pass.

Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1638096#c20

Comment 2 Sandro Bonazzola 2018-11-26 08:49:24 UTC
Moving to QE as this is test only.

Comment 3 Nikolai Sednev 2019-01-23 09:59:50 UTC
Testing matrix iterations performed:

The basic testing environment consisted of 2 or 3 ha-hosts, one ISO domain, 4 guest VMs, the engine VM, and a data storage domain for the guest VMs, on either FC, NFS, iSCSI, or Gluster.

Components verified on:
ovirt-engine-setup-4.2.8.2-0.1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.33-1.el7ev.noarch
rhvm-appliance-4.2-20190108.0.el7.noarch
Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.6 (Maipo)

Matrix Vintage to Node0:
1. NFS to NFS - Pass, with this configuration -> Redeploy on SPM host with spm_id=1 && power management configured and all hosts could be reached
2. iSCSI to iSCSI - Pass, with this configuration -> Redeploy on non-SPM host with spm_id!=1 && power management configured and some hosts unreachable
3. Gluster to iSCSI - Pass, with this configuration -> Redeploy on an env where the master storage domain as reported in the backup is still alive && power management configured and all hosts unreachable
4. FC to NFS - Pass (with the exception of https://bugzilla.redhat.com/show_bug.cgi?id=1665138), with this configuration -> Redeploy on an env where the SPM is not what we have in the backup && power management not configured

Matrix Node0 to Node0:
1. NFS to FC - Pass, with this configuration -> Redeploy on SPM host with spm_id=1 && power management configured and all hosts could be reached
2. iSCSI to Gluster - Pass, with this configuration -> Redeploy on non-SPM host with spm_id!=1 && power management configured and some hosts unreachable
3. Gluster to iSCSI - Fail (https://bugzilla.redhat.com/show_bug.cgi?id=1667708), with this configuration -> Redeploy on env where SPM is not what we have in the backup && power management not configured
4. FC to NFS - Fail (https://bugzilla.redhat.com/show_bug.cgi?id=1644748), with this configuration -> Redeploy on an env where the master storage domain as reported in the backup is still alive && power management configured and all hosts unreachable

Matrix bare metal to Node0:
1. Local to NFS - Pass, with this configuration -> Redeploy on SPM host with spm_id=1 && power management configured and all hosts could be reached
2. Local to iSCSI - Pass, with this configuration -> Redeploy on non-SPM host with spm_id!=1 && power management configured and some hosts unreachable
3. Local to Gluster - Pass, with this configuration -> Redeploy on env where SPM is not what we have in the backup && power management not configured
4. Local to FC - Pass, with this configuration -> Redeploy on an env where the master storage domain as reported in the backup is still alive && power management configured and all hosts unreachable

*Across all storage and matrix types, the following scenario is currently blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1648987:
redeploy on an environment where the master storage domain as reported in the backup is not available and the engine has to elect a new master SD.
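
For reference, a generic post-restore sanity check of this kind confirms that a redeployed environment came back up (engine FQDN and credentials are placeholders; this is a sketch, not necessarily the exact verification used for each iteration above):

# On any HE host: confirm the hosted-engine VM and HA state.
hosted-engine --vm-status
hosted-engine --check-liveliness

# Confirm the restored engine answers on the REST API.
curl -s -k -u admin@internal:password https://engine.example.com/ovirt-engine/api | grep '<product_info>'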

Moving to verified.

