Bug 1553782

Summary: don't allow deletion of unmanaged resource
Product: Red Hat Enterprise Linux 8
Reporter: Josef Zimek <pzimek>
Component: pcs
Assignee: Tomas Jelinek <tojeline>
Status: CLOSED WONTFIX
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 8.0
CC: bugzilla, cfeist, cluster-maint, idevat, mlisik, mpospisi, omular, tojeline
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-20 07:27:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1420298
Bug Blocks:

Description Josef Zimek 2018-03-09 14:27:44 UTC
Description of problem:

`pcs resource unmanage` is intended to take the cluster's hands off a resource, e.g. for maintenance (monitoring stays enabled, but failures are not considered critical while the resource is unmanaged). If a resource is deleted while in the unmanaged state, it ends up in an ORPHANED state: it is removed from the CIB but still present in the running configuration. This can cause various issues. For example, when an unmanaged resource has been stopped manually outside the cluster, the stop operation attempted during deletion may fail, which can end with stonith being initiated; this is not desired for an unmanaged resource.

Upon deletion of a resource we should check whether it is unmanaged. If it is, deletion should fail with a warning that the resource must be managed in order to be deleted. This will prevent the subsequent problems.

(The subsequent problems depend on the type of resource; some are more sensitive to this than others. For example, with an Oracle resource: when the Oracle DB is stopped outside the cluster while the resource is unmanaged and the resource is then deleted, the monitor/stop operation fails, the failure is considered critical, and it leads to fencing.)

Part of resource deletion is a stop operation, which may fail (depending on manual intervention on the resource outside the cluster). If a user wants to delete a resource, it should be done while the resource is managed, so that the stop/monitor operations can proceed gracefully.
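The proposed guard could be sketched as follows. This is a minimal illustration, not actual pcs code; the function names, the `delete_resource` placeholder, and the sample CIB fragment (resource id `ora_db`) are hypothetical, but the `is-managed` meta attribute and the `primitive`/`meta_attributes`/`nvpair` CIB elements are standard Pacemaker:

```python
# Minimal sketch (not actual pcs code): refuse to delete a resource
# whose is-managed meta attribute is "false" in the CIB.
import xml.etree.ElementTree as ET

def is_unmanaged(cib_xml: str, resource_id: str) -> bool:
    """Return True if the resource carries is-managed=false in its meta attributes."""
    root = ET.fromstring(cib_xml)
    for prim in root.iter("primitive"):
        if prim.get("id") != resource_id:
            continue
        for nv in prim.iter("nvpair"):
            if nv.get("name") == "is-managed":
                return nv.get("value", "").lower() in ("false", "off", "no", "0")
    return False

def delete_resource(cib_xml: str, resource_id: str) -> str:
    """Proposed behavior: fail with a warning instead of deleting an unmanaged resource."""
    if is_unmanaged(cib_xml, resource_id):
        raise RuntimeError(
            f"Error: resource '{resource_id}' is unmanaged; "
            "run 'pcs resource manage' before deleting it"
        )
    return f"deleting {resource_id}"  # placeholder for the real deletion logic

# Hypothetical CIB fragment with one unmanaged oracle resource.
CIB = """
<cib><configuration><resources>
  <primitive id="ora_db" class="ocf" provider="heartbeat" type="oracle">
    <meta_attributes id="ora_db-meta_attributes">
      <nvpair id="ora_db-is-managed" name="is-managed" value="false"/>
    </meta_attributes>
  </primitive>
</resources></configuration></cib>
"""
```

With this check in place, `delete_resource(CIB, "ora_db")` would refuse to proceed, while a managed resource would be deleted as today.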


Version-Release number of selected component (if applicable):
pcs-0.9.152-10.el7.x86_64 
pacemaker-1.1.15-11.el7_3.2.x86_64



How reproducible:
Depends on the manual actions performed on the resource while it is unmanaged, e.g. stopping it outside the cluster and then deleting it via pcs. An IPaddr2 resource merely ends up in the ORPHANED state, but an oracle resource fails to stop/monitor.

Steps to Reproduce:
1. Stop the Oracle DB manually outside the cluster while the oracle resource is unmanaged (make sure the PID no longer exists)
2. Delete the oracle resource
3. The node gets fenced upon the stop/monitor failure (but monitoring should not be critical while unmanaged)
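The steps above can be sketched with pcs commands; this assumes a running cluster and an oracle resource named `ora_db` (hypothetical), and is illustrative only:

```shell
# Sketch of the reproduction, assuming a hypothetical oracle resource "ora_db".
pcs resource unmanage ora_db   # take the cluster's hands off the resource
# ... stop the Oracle DB manually, outside the cluster,
#     and verify its PID is gone ...
pcs resource delete ora_db     # deletion is currently allowed even while unmanaged
pcs status                     # the resource may show as ORPHANED, or the
                               # stop/monitor failure escalates to fencing
```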

Actual results:
The unmanaged resource gets deleted and ends up ORPHANED, which may lead to various problems.

Expected results:
Check the unmanaged flag when deleting a resource. Resource deletion should not be allowed in the unmanaged state; print a warning to manage the resource prior to deletion.

Additional info:
Logically there is no need to unmanage a resource in order to delete it, because it can be deleted while managed (whether running, stopped, or failed). While a resource is unmanaged, we still expect the cluster to keep monitoring it unless monitoring is manually disabled. Yet we allow deleting it in such a state, which is a bit contradictory: we delete it from the CIB while keeping it in the running configuration.

Comment 10 RHEL Program Management 2021-08-20 07:27:18 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 11 bugzilla 2023-08-16 22:02:24 UTC
In case anyone else hits this, here is the work-around:

`pcs resource refresh <resource_name>`