Bug 1286291 - [rgmanager]: defaults shall be located in metadata, not hidden in the code (netfs.sh)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: resource-agents
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
Reported: 2015-11-27 15:08 EST by Jan Pokorný
Modified: 2017-03-21 05:27 EDT
CC List: 5 users

Fixed In Version: resource-agents-3.9.5-37.el6
Doc Type: If docs needed, set a value
Last Closed: 2017-03-21 05:27:15 EDT
Type: Bug

Attachments: None
Description Jan Pokorný 2015-11-27 15:08:13 EST
Based on batch clufter testing, I've discovered an anti-pattern in the
netfs.sh agent, specifically in its populate_defaults function: hiding
the default value there defeats the purpose of separating code and data
values.  Such defaults should definitely go to metadata.  Among other
benefits, the value is then easy to extract (to the benefit of
configuration management).
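
For instance, a configuration-management tool could then pull the
default straight from the agent's own metadata.  A one-liner sketch,
assuming the stock /usr/share/cluster/ agent location and libxml2's
xmllint (neither of which this bug mandates):

/usr/share/cluster/netfs.sh meta-data \
    | xmllint --xpath 'string(//parameter[@name="fstype"]/content/@default)' -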

The optimal way to handle this is to specify the particular default
as a separate variable ("constant") in the script and to refer to it
from within the metadata snippet emitted upon a "meta-data" request
-- this is what heartbeat agents usually do.
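
A minimal shell sketch of that pattern (names illustrative, not the
actual netfs.sh code):

# single authoritative place for the default value
OCF_RESKEY_fstype_default="nfs"

meta_data()
{
cat <<EOT
<?xml version="1.0"?>
<resource-agent name="netfs">
    <parameters>
        <parameter name="fstype">
            <content type="string" default="$OCF_RESKEY_fstype_default"/>
        </parameter>
    </parameters>
</resource-agent>
EOT
}

# at runtime, fall back to the same constant only when the user
# left the parameter unset
: "${OCF_RESKEY_fstype:=$OCF_RESKEY_fstype_default}"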

I am intentionally leaving this bug with a possibly broader scope:
when solving it, it would make sense to check the other rgmanager
agents for the same anti-pattern as well...
Comment 2 Oyvind Albrigtsen 2015-12-14 07:12:18 EST
Moved fstype default to metadata and verified that it's still working.

https://github.com/ClusterLabs/resource-agents/pull/719
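
A quick spot-check for such a change (same assumed agent path as
above): the moved default should now show up in the emitted metadata,
e.g.

/usr/share/cluster/netfs.sh meta-data | grep -A3 'name="fstype"'

while starting the service without fstype configured should behave
exactly as before.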
Comment 4 Mike McCune 2016-03-28 19:14:23 EDT
This bug was accidentally moved from POST to MODIFIED via an error in
automation; please contact mmccune@redhat.com with any questions.
Comment 6 michal novacek 2017-02-01 05:07:56 EST
I have verified that the behaviour of the netfs resource agent remains
unchanged after the patch, in resource-agents-3.9.5-43:

----
[root@virt-067 ~]# ccs --lsservices
localhost password: 
service: name=nfs-mountpoint, recovery=relocate
  netfs: ref=le-netfs
resources: 
  netfs: name=le-netfs, mountpoint=/mnt, host=virt-005, export=/mnt, force_unmount=1

[root@virt-067 ~]# clustat
Cluster Status for STSRHTS23364 @ Wed Feb  1 11:00:20 2017
Member Status: Quorate

 Member Name            ID   Status
 ------ ----            ---- ------
 virt-006                   1 Online, rgmanager        
 virt-007                   2 Online, rgmanager
 virt-008                   3 Online, rgmanager
 virt-009                   4 Online, rgmanager
 virt-013                   5 Online, rgmanager
 virt-014                   6 Online, rgmanager
 virt-016                   7 Online, rgmanager
 virt-018                   8 Online, rgmanager
 virt-056                   9 Online, rgmanager
 virt-057                  10 Online, rgmanager
 virt-058                  11 Online, rgmanager
 virt-059                  12 Online, rgmanager
 virt-060                  13 Online, rgmanager
 virt-061                  14 Online, rgmanager
 virt-062                  15 Online, rgmanager
 virt-067                  16 Online, Local, rgmanager

 Service Name            Owner (Last)                    State         
 ------- ----            ----- ------                    -----         
 service:nfs-mountpoint  virt-007                        started       

[root@virt-067 ~]# exit
logout
Connection to virt-067 closed.

[root@virt-006 cluster]# ssh virt-007
[root@virt-007 ~]# mount
...
virt-005:/mnt on /mnt type nfs (rw,sync,soft,noac,vers=4,addr=10.34.70.132,clientaddr=10.34.70.134)

[root@virt-007 ~]# clusvcadm -r nfs-mountpoint
Trying to relocate service:nfs-mountpoint...Success
service:nfs-mountpoint is now running on virt-062

[root@virt-007 ~]# ssh virt-062

[root@virt-062 ~]# mount 
...
virt-005:/mnt on /mnt type nfs (rw,sync,soft,noac,vers=4,addr=10.34.70.132,clientaddr=10.34.70.189)

[root@virt-062 ~]# clusvcadm -d nfs-mountpoint
Local machine disabling service:nfs-mountpoint...Success

[root@virt-062 ~]# mount
/dev/mapper/vg_virt062-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /sys/kernel/config type configfs (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
Comment 8 errata-xmlrpc 2017-03-21 05:27:15 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0602.html
