Bug 1109796

Summary: [RFE] Add custom fence agents support
Product: [Retired] oVirt
Reporter: Eli Mesika <emesika>
Component: ovirt-engine-core
Assignee: Eli Mesika <emesika>
Status: CLOSED CURRENTRELEASE
QA Contact: sefi litmanovich <slitmano>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.5
CC: emesika, gklein, iheim, rbalakri, slitmano, yeylon
Keywords: FutureFeature
Target Milestone: ---
Target Release: 3.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: infra
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-10-17 12:34:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Infra
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 878662

Description Eli Mesika 2014-06-16 11:38:01 UTC
Description of problem:
Up to now, if a customer needed to add a custom fence agent that is not listed in oVirt, they had to tweak the relevant stock oVirt configuration values to achieve that.

The main problem with this method, besides its complexity, is that it does not survive product upgrades: the custom settings are lost after upgrading oVirt.

oVirt should provide a friendly way to add custom fence agents and to preserve those changes across oVirt upgrades.
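
For illustration, a minimal sketch of what such a flow could look like on the engine side, using engine-config. Only the CustomVdsFenceOptionMapping key appears later in this bug; the CustomVdsFenceType key, the example values and the restart command are assumptions and may differ per release:

# On the engine host: declare the custom agent type and its option mapping
# (CustomVdsFenceType is assumed; "my_agent" is a hypothetical type name)
engine-config -s CustomVdsFenceType="my_agent"
engine-config -s CustomVdsFenceOptionMapping="my_agent:"
service ovirt-engine restart

Because these would be ordinary engine-config keys rather than edits to the stock configuration values, they would be expected to survive an engine upgrade.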


Comment 1 sefi litmanovich 2014-08-12 11:33:31 UTC
Basic functionality is working well as per:
https://tcms.engineering.redhat.com/run/164181/

issues:

After adding a fence agent with the type name "test_fence_agent" and copying the ipmilan fence script on the proxy host to a new script "fence_test_fence_agent" (roughly as sketched below), I tried several configurations of the CustomVdsFenceOptionMapping parameter.
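
(For reference, the proxy-host side of that setup amounts to roughly the following; the /usr/sbin path is an assumption about where fence agent scripts live on the proxy host:)

# on the proxy host: clone the ipmilan agent under the custom type name
cp /usr/sbin/fence_ipmilan /usr/sbin/fence_test_fence_agent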


1. CustomVdsFenceOptionMapping="test_fence_Agent:" (same as the ipmilan mapping) - restart engine - mapping in the UI is similar to ipmilan - test works.

in proxy host vdsm.log:

Thread-45::DEBUG::2014-08-12 12:29:08,838::API::1153::vds::(fenceNode) fenceNode(addr=rose07-mgmt.qa.lab.tlv.redhat.com,port=,agent=test_fence_agent,user=root,passwd=XXXX,action=status,secure=,options=)

2. CustomVdsFenceOptionMapping="test_fence_Agent:port=invalid" - restart engine - mapping in the UI has an ssh port field with no value - test works.

same message in vdsm.log (port=,agent.....), port didn't get the invalid value.

3. CustomVdsFenceOptionMapping="test_fence_Agent:port=ipport" - restart engine - mapping in the UI has an ssh port field with no value - test works.

same message in vdsm.log (port=,agent.....), port didn't get any value.

4. CustomVdsFenceOptionMapping="test_fence_Agent:blabla=invalid" - restart engine - mapping in the UI is similar to ipmilan - test works.

same vdsm message.

5. CustomVdsFenceOptionMapping="different_fence_Agent:" - restart engine - mapping has all the different fields (as default apc) - test fails with message:

'Power Management test failed for Host monique-vds01.tlv.redhat.com.There is no other host in the data center that can be used to test the power management settings.'

Basically it seems that defining an agent with type name X, providing a script fence_X, and configuring any sort of mapping is enough for the agent to work.
Is this the expected functionality, and when do the 'CustomVdsFenceOptionMapping' values actually make a difference?
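
For reference, my working assumption about the format (based on these tests, not confirmed in this bug) is a semicolon-separated list of per-agent entries, each optionally mapping engine option names to agent option names:

# hypothetical example: map the engine's port field to the agent's "ipport" option
# for one agent, and apply no remapping at all for a second agent
CustomVdsFenceOptionMapping="custom_agent1:port=ipport,secure=secure;custom_agent2:"

If that is right, an empty mapping after the colon simply passes the options through under their default names, which would explain why cases 1-4 all produced the same fenceNode call.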

Comment 3 sefi litmanovich 2014-08-25 12:42:12 UTC
Verified with ovirt-engine-3.5.0-0.0.master.20140821064931.gitb794d66.el6.noarch according to TCMS plan 14443.

Comment 4 Sandro Bonazzola 2014-10-17 12:34:49 UTC
oVirt 3.5 has been released and should include the fix for this issue.