============= RFE REQUEST ================

1. Proposed title of this feature request

SRM - site recovery manager for disaster recovery in RHEV

3. What is the nature and description of the request?

Customer is looking for a feature like VMware Site Recovery Manager for disaster recovery: taking a VM from one RHEV site and failing it over to another. VMware uses a tool called SRM which creates a plan that will break our SnapMirror (NetApp NAS/SAN) relationship between the Prod and DR site and mount the volumes once broken.
https://www.vmware.com/products/site-recovery-manager
Customer wants this feature for RHEV.

4. Why does the customer need this? (List the business requirements here)

> To fail over our Production environment to our Disaster Recovery site.

5. How would the customer like to achieve this? (List the functional requirements here)

> A plugin that would see both our storage devices (NetApp filers) on both ends (RHEV prod, RHEV colo), allow us to create a recovery plan, and automatically spin up the VMs when the plan is initiated. It would see the array replication and take corrective actions when needed.
https://www.youtube.com/watch?v=kgZ62GS21Vc (great video on how SRM works)

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

> No

8. Does the customer have any specific timeline dependencies and which release would they like to target?

> None, we have a manual way of doing it now: breaking a SnapMirror and turning on the host on the other side.

9. Is the sales team involved in this request and do they have any additional input?

10. List any affected packages or components.

11. Would the customer be able to assist in testing this functionality if implemented?

> Possibly, depending on whether this affects our production environment.
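The manual workaround mentioned in item 8 amounts to something like the following. This is a rough sketch only: it assumes NetApp ONTAP's snapmirror CLI, and the SVM/volume path shown is hypothetical, not taken from the customer's environment.

```text
# On the DR filer: break the SnapMirror relationship so the DR copy
# of the VM storage volume becomes read-write (path is hypothetical).
snapmirror break -destination-path dr-svm:vm_datastore

# Then, on the DR side of RHEV: import/activate the storage domain
# backed by that volume and power on the replicated VMs manually.
```

The RFE asks for this break-then-activate sequence to be driven by a recovery plan instead of by hand.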
*** Bug 890655 has been marked as a duplicate of this bug. ***
*** Bug 1021066 has been marked as a duplicate of this bug. ***
*** Bug 1117056 has been marked as a duplicate of this bug. ***
*** Bug 1395763 has been marked as a duplicate of this bug. ***
Site to site: covered all of the feature's functionality, including failover and failback (which include generate-mapping and the validator), according to the Polarion test plan in the attached external tracker.

Used:
ovirt-ansible-disaster-recovery-0.4-1.el7ev.noarch
ansible-2.5.2-1.el7ae.noarch

Active-active: certified according to https://docs.google.com/document/d/1-ZWco1Z-BcTJezqSGhdQC9gBSC79sw9cDiq7Dfx_czE/edit
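For reference, the packaged solution above is consumed as an Ansible role. The playbook below is a minimal sketch of invoking a failover, assuming the role name shipped by ovirt-ansible-disaster-recovery and illustrative variable values; the mapping file path and site names are hypothetical, and the exact variable set should be checked against the role's own documentation.

```yaml
# failover.yml - hedged sketch of a DR failover playbook
- hosts: localhost
  connection: local
  roles:
    - role: oVirt.disaster-recovery   # role from ovirt-ansible-disaster-recovery
  vars:
    dr_target_host: secondary                  # site to fail over to (assumed var name)
    dr_source_map: primary                     # site being failed away from (assumed)
    dr_ansible_mapping_file: /var/dr/mapping.yml  # generated site/storage mapping (hypothetical path)
```

The generate-mapping and validator steps mentioned above produce and sanity-check the mapping file before a plan like this is run.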
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:1488