Bug 1077888 - pacemaker HA samba with CTDB required fixes
Summary: pacemaker HA samba with CTDB required fixes
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
Depends On: 1435708
Blocks: 1301878
Reported: 2014-03-18 19:11 UTC by David Vossel
Modified: 2017-08-02 07:00 UTC
CC: 20 users

Full support for CTDB resource agent

The CTDB resource agent used to implement a Samba deployment is now supported in Red Hat Enterprise Linux.
Clone Of: 1077887
Last Closed: 2017-08-01 14:55:11 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1844 normal SHIPPED_LIVE resource-agents bug fix and enhancement update 2017-08-01 17:49:20 UTC

Description David Vossel 2014-03-18 19:11:41 UTC
+++ This bug was initially created as a clone of Bug #1077887 +++

Description of problem:

The CTDB resource agent used to implement an HA Samba deployment is broken in RHEL. The agent needs a series of upstream patches.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Set up CTDB with default parameters using Pacemaker.
2. The CTDB resource fails to start because the agent's default directories and socket locations are wrong for RHEL.
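For illustration, a minimal Pacemaker configuration of the kind that triggers the failure might look like this (the resource name and recovery-lock path are hypothetical, not taken from this report; the recovery lock must live on shared storage):

```shell
# Create a CTDB resource relying on the agent's default parameters.
# Before the fix, the agent's compiled-in defaults for directories and
# socket locations did not match the RHEL packaging, so the resource
# failed to start even with an otherwise correct cluster.
pcs resource create ctdb ocf:heartbeat:CTDB \
    ctdb_recovery_lock="/mnt/ctdb/.ctdb.lock" \
    op monitor interval=10s clone
```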

Actual results:

ctdb fails

Expected results:

ctdb works with default values

Additional info:

The work required to implement this is already done upstream in these two pull requests.

https://github.com/ClusterLabs/resource-agents/pull/376
https://github.com/ClusterLabs/resource-agents/pull/397

Comment 4 Abhijith Das 2014-03-20 05:30:09 UTC
Justin Payne has been testing clustered Samba on RHEL 7 by manually configuring and managing CTDB, and reports that the setup works as expected.

I've spoken to Steven Levine about documenting the use case, and he says it's possible in the 7.0 timeframe if we can give him an outline of how to set it up. What we need to determine is whether the CTDB resource agent in Pacemaker can be fixed. If not, manually managing CTDB is still an option we can go with; that is what users did on RHEL 6 anyway.

Comment 32 Vlad Ionescu 2015-09-20 18:34:52 UTC
All GlusterFS documentation, including Red Hat's own Up and Running with oVirt [1], uses CTDB to set up HA for Gluster.

This seems like a trivial issue with an available upstream fix, yet it has kept CTDB unusable from the RHEL 7 repositories for 1.5 years.

Can we please get this fixed? I am willing to contribute if I can move this forward in any way. Would love to help!

[1] https://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/

Comment 33 Oyvind Albrigtsen 2017-03-28 14:39:38 UTC
New build that fixes the agent to use the --logging option, which replaced --logfile/--syslog in newer CTDB versions.
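For context, newer CTDB releases consolidated the separate logging switches into a single option. A sketch of the change the fixed agent accounts for (the log file path is illustrative):

```shell
# Old options, removed in newer CTDB releases:
ctdbd --logfile=/var/log/log.ctdb   # log to a file
ctdbd --syslog                      # log to syslog

# Replacement in newer CTDB releases, which the updated agent uses:
ctdbd --logging=file:/var/log/log.ctdb
ctdbd --logging=syslog
```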

Comment 36 Steven J. Levine 2017-05-09 22:32:47 UTC
Oyvind:

(After discussion with Chris Feist...)

I put this in needinfo from you to be sure you see this.

I will not have documentation ready for the full deployment of a Samba cluster in time for the RHEL 7.4 Beta, but it does seem a good idea to note support for the now-fixed CTDB agent in the release notes, which is what I have added to the Doc Text field here (a bare minimum). This is just a summary of what's in the release; if somebody needs this feature, they should be able to contact support.

The plan/hope is for me to have a full procedure ready for RHEL 7.4 GA, at which point we can link to that documentation from the release note. For now I'm just mentioning that it is supported.

If that is an issue for you, let me (and Chris) know.  But we should have this for the GA in a few months.

Steven

Comment 39 Justin Payne 2017-05-26 21:28:44 UTC
Verified in resource-agents-3.9.5-99.el7:

[root@host-008 ~]# rpm -q resource-agents
resource-agents-3.9.5-99.el7.x86_64
[root@host-008 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: host-009 (version 1.1.16-9.el7-94ff4df) - partition with quorum
 Last updated: Fri May 26 15:17:33 2017
 Last change: Fri May 26 14:34:04 2017 by root via cibadmin on host-008
 2 nodes configured
 6 resources configured

PCSD Status:
  host-008: Online
  host-009: Online
[root@host-008 ~]# pcs property set no-quorum-policy=freeze
[root@host-008 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
Error: 'dlm' already exists
[root@host-008 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@host-008 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@host-008 ~]# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@host-008 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
[root@host-008 ~]# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created.
[root@host-008 ~]# vgcreate -Ay -cy csmb_vg /dev/sda1
  Clustered volume group "csmb_vg" successfully created
[root@host-008 ~]# lvcreate -L1G -n ctdb_lv csmb_vg
  Logical volume "ctdb_lv" created.
[root@host-008 ~]# mkfs.gfs2 -j2 -p lock_dlm -t STSRHTS1596:ctdb /dev/csmb_vg/ctdb_lv 
/dev/csmb_vg/ctdb_lv is a symbolic link to /dev/dm-2
This will destroy any data on /dev/dm-2
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done 
Building resource groups: Done 
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/csmb_vg/ctdb_lv
Block size:                4096
Device size:               1.00 GB (262144 blocks)
Filesystem size:           1.00 GB (262142 blocks)
Journals:                  2
Resource groups:           5
Locking protocol:          "lock_dlm"
Lock table:                "STSRHTS1596:ctdb"
UUID:                      2328c298-f6a4-4d3e-9e8a-c93b6e7ad0ff
[root@host-008 ~]# pvcreate /dev/sda2
  Physical volume "/dev/sda2" successfully created.
[root@host-008 ~]# vgcreate -Ay -cy csmb_vg2 /dev/sda2
  Clustered volume group "csmb_vg2" successfully created
[root@host-008 ~]# lvcreate -L4G -n csmb_lv1 csmb_vg2  
  Logical volume "csmb_lv1" created.
[root@host-008 ~]# mkfs.gfs2 -j2 -p lock_dlm -t STSRHTS1596:csmb1 /dev/csmb_vg2/csmb_lv1 
/dev/csmb_vg2/csmb_lv1 is a symbolic link to /dev/dm-3
This will destroy any data on /dev/dm-3
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done 
Building resource groups: Done   
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/csmb_vg2/csmb_lv1
Block size:                4096
Device size:               4.00 GB (1048576 blocks)
Filesystem size:           4.00 GB (1048575 blocks)
Journals:                  2
Resource groups:           17
Locking protocol:          "lock_dlm"
Lock table:                "STSRHTS1596:csmb1"
UUID:                      fe01bdbd-a57b-417e-88e6-41eda5347191
[root@host-008 ~]# lvcreate -L4G -n csmb_lv2 csmb_vg2
  Logical volume "csmb_lv2" created.
[root@host-008 ~]# mkfs.gfs2 -j2 -p lock_dlm -t STSRHTS1596:csmb2 /dev/csmb_vg2/csmb_lv2
/dev/csmb_vg2/csmb_lv2 is a symbolic link to /dev/dm-4
This will destroy any data on /dev/dm-4
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done 
Building resource groups: Done   
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/csmb_vg2/csmb_lv2
Block size:                4096
Device size:               4.00 GB (1048576 blocks)
Filesystem size:           4.00 GB (1048575 blocks)
Journals:                  2
Resource groups:           17
Locking protocol:          "lock_dlm"
Lock table:                "STSRHTS1596:csmb2"
UUID:                      d846573c-0d3a-46d5-b0d0-c14874ce67eb
[root@host-008 ~]# pcs resource create ctdb_fs Filesystem device="/dev/csmb_vg/ctdb_lv" directory="/mnt/ctdb" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@host-008 ~]# pcs resource create csmb_fs1 Filesystem device="/dev/csmb_vg2/csmb_lv1" directory="/mnt/share1" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@host-008 ~]# pcs resource create csmb_fs2 Filesystem device="/dev/csmb_vg2/csmb_lv2" directory="/mnt/share2" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@host-008 ~]# mount | grep gfs2
/dev/mapper/csmb_vg-ctdb_lv on /mnt/ctdb type gfs2 (rw,relatime,seclabel)
/dev/mapper/csmb_vg2-csmb_lv1 on /mnt/share1 type gfs2 (rw,relatime,seclabel)
/dev/mapper/csmb_vg2-csmb_lv2 on /mnt/share2 type gfs2 (rw,relatime,seclabel)
[root@host-008 ~]# pcs constraint order start clvmd-clone then ctdb_fs-clone
Adding clvmd-clone ctdb_fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@host-008 ~]# pcs constraint colocation add ctdb_fs-clone with clvmd-clone
[root@host-008 ~]# pcs constraint order start clvmd-clone then csmb_fs1-clone
Adding clvmd-clone csmb_fs1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@host-008 ~]# pcs constraint colocation add csmb_fs1-clone with clvmd-clone
[root@host-008 ~]# pcs constraint order start clvmd-clone then csmb_fs2-clone
Adding clvmd-clone csmb_fs2-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@host-008 ~]# pcs constraint colocation add csmb_fs2-clone with clvmd-clone
[root@host-008 ~]# chmod 777 /mnt/share1 /mnt/share2
[root@host-008 ~]# systemctl start ctdb
[root@host-008 ~]# ctdb status
Number of nodes:2
pnn:0 10.15.105.8      OK (THIS NODE)
pnn:1 10.15.105.9      OK
Generation:836078595
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0
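The transcript above starts CTDB through systemd; with the fixed agent, the daemon could instead be managed as a cluster resource. A hedged sketch of that alternative (the resource name and recovery-lock path are illustrative, not taken from the transcript; `ctdb_fs-clone` is the filesystem clone created above):

```shell
# Let Pacemaker manage CTDB directly via the fixed resource agent.
# The recovery lock should sit on the shared GFS2 filesystem mounted at /mnt/ctdb.
pcs resource create ctdb ocf:heartbeat:CTDB \
    ctdb_recovery_lock="/mnt/ctdb/.ctdb.lock" \
    op monitor interval=10s on-fail=fence clone interleave=true

# Start CTDB only after its shared filesystem is available, on the same nodes.
pcs constraint order start ctdb_fs-clone then ctdb-clone
pcs constraint colocation add ctdb-clone with ctdb_fs-clone
```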

Comment 40 errata-xmlrpc 2017-08-01 14:55:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1844

