Bug 722965 - drbd-rgmanager rpm package depends on Red Hat Cluster Suite version 2 but works on RHCS 3
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: resource-agents
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: LINBIT
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-18 15:50 UTC by Victor Ramirez
Modified: 2013-04-09 19:25 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-04-09 19:25:58 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Victor Ramirez 2011-07-18 15:50:53 UTC
Description of problem:

drbd-rgmanager rpm package depends on Red Hat Cluster Suite version 2 but works on RHCS 3. To get it working, I built the drbd package using the following commands:

# cd drbd-8.3.8.1/
# ./configure
# make rpm RPMOPT="--with rgmanager"

To install, I had to use the "--nodeps" option to ignore the dependency on RHCS version 2.x:

# rpm -Uvh --nodeps ~/rpmbuild/RPMS/x86_64/drbd-rgmanager-8.3.8.1-1.el6.x86_64.rpm
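
For reference, the stale requirement can be seen in the package itself. Querying the built RPM (plain rpm usage, shown here in case it helps triage) lists the RHCS 2-era dependency among its requirements:

# rpm -qpR ~/rpmbuild/RPMS/x86_64/drbd-rgmanager-8.3.8.1-1.el6.x86_64.rpm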

Unfortunately, the resource agent did not work out of the box. Perhaps I should have added a delay in cluster.conf instead, but I simply added "sleep 1" at the beginning of the resource agent /usr/share/cluster/drbd.sh.
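
To be clear, that is the whole extent of my change. Roughly, assuming the stock drbd.sh is a bash script (the rest of the file is untouched), the top of the modified agent looks like this:

#!/bin/bash
#
# /usr/share/cluster/drbd.sh (modified)
# Workaround: pause briefly so the cluster stack has settled before
# the agent starts acting on the DRBD resource.
sleep 1

# ... remainder of the original drbd.sh, unchanged ...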

Failover works!!!

The point is that I was tempted to use GFS2 over dual-primary DRBD precisely because of these drbd-rgmanager installation obstacles, and because I could not find anybody else who had tried the resource agent on RHCS 3. Users should not be pushed toward dual-primary DRBD unless it is strictly necessary, since (as I did) they will struggle with frustrating split-brain issues.
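
For anyone else attempting this on RHCS 3: the rgmanager side is an ordinary resource reference in cluster.conf. A minimal sketch, where the service name, DRBD resource name "r0", and filesystem details are illustrative rather than taken from my cluster:

<service autostart="1" name="svc-data">
    <drbd name="drbd-data" resource="r0">
        <fs name="fs-data" device="/dev/drbd/by-res/r0"
            mountpoint="/mnt/data" fstype="ext4"/>
    </drbd>
</service>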
 

Version-Release number of selected component (if applicable):

 - drbd 8.3.8.1 source
 - RHEL 6.0 and 6.1
 - Red Hat Cluster Suite 3.0.12-22.el6

Comment 1 Victor Ramirez 2011-07-18 15:55:05 UTC
P.S. I understand that this is a drbd bug, not an RHCS bug. Nevertheless, I was advised to file the bug here, as it will be routed to the drbd development team.

Comment 3 Madison Kelly 2011-07-18 16:12:10 UTC
Could someone at RH or Linbit test this bug against the official DRBD RPM?

Comment 5 Fabio Massimo Di Nitto 2013-04-09 19:25:58 UTC
LINBIT has not responded to this BZ in over 1.5 years. Closing; Red Hat has no way to reproduce the issue to verify where the problem lies.

