Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED" with resolution "MIGRATED" and "MigratedToJIRA" added to "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 2088962

Summary: [RFE] oraasm: The oraasm resource agent should manage the specified Oracle ASM diskgroup (RHEL 9)
Product: Red Hat Enterprise Linux 9
Reporter: Reid Wahl <nwahl>
Component: resource-agents
Assignee: Oyvind Albrigtsen <oalbrigt>
Status: CLOSED MIGRATED
QA Contact: cluster-qe <cluster-qe>
Severity: medium
Docs Contact:
Priority: medium
Version: 9.0
CC: agk, cluster-maint, fdinitto, oalbrigt, sbradley
Target Milestone: rc
Keywords: FutureFeature, MigratedToJIRA
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 1947246
Environment:
Last Closed: 2023-09-22 19:37:20 UTC
Type: Story
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1947246
Bug Blocks:

Description Reid Wahl 2022-05-22 00:31:41 UTC
+++ This bug was initially created as a clone of Bug #1947246 +++

Description of problem:

The oraasm resource agent accepts a diskgroup as a required parameter, but it does nothing to manage that diskgroup directly. It simply starts, stops, and monitors Oracle Clusterware High Availability Services (HAS) (via the ohasd service).

This is problematic for a couple of reasons:
  - It can return a false positive monitoring result when HAS is active but ASM is inactive or the specified diskgroup is not mounted. (See also BZ1786812.)
  - It does not allow creating multiple oraasm resources to manage multiple individual diskgroups. If a user creates two oraasm resources, both simply manage the ohasd service as a whole. This is much less useful than being able to manage one diskgroup per resource, and it's counter-intuitive: the average user expects the resource to manage the diskgroup it's configured to manage.

We should overhaul this resource agent, potentially creating a revised "oraasm2" to reflect the significant changes in behavior.

At a high level, what it ought to do appears to be pretty straightforward:
  - Start: Mount the specified diskgroup[1].
  - Stop: Unmount ("dismount" in Oracle's terminology) the specified diskgroup[1].
  - Monitor: If Oracle cluster services are running, check the status of the specified diskgroup (using the `asmcmd lsdg` command[2]).
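The monitor step above could be sketched as a shell helper (resource agents are shell scripts). This is an illustration only: the `dg_state` and `oraasm_monitor` names, the exact `asmcmd lsdg` column layout, and the treatment of a missing diskgroup as "not running" are all assumptions, not the current agent's code.

```shell
#!/bin/sh
# Sketch only; not the existing oraasm implementation.

# Standard OCF exit codes (for illustration).
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

# Extract the State column for a given diskgroup from `asmcmd lsdg`-style
# output. Reads the output on stdin so it can be exercised without a live
# ASM instance; assumes the Name field is the last column, possibly with
# a trailing slash (e.g. "DATA/").
dg_state() {
    awk -v dg="$1" '
        { name = $NF; sub(/\/$/, "", name) }
        name == dg { print $1 }
    '
}

# Monitor: report success only if the specified diskgroup is mounted.
oraasm_monitor() {
    dg="$1"
    state=$(asmcmd lsdg "$dg" 2>/dev/null | dg_state "$dg")
    case "$state" in
        MOUNTED)       return $OCF_SUCCESS ;;
        DISMOUNTED|"") return $OCF_NOT_RUNNING ;;
        *)             return $OCF_ERR_GENERIC ;;
    esac
}
```

Parsing into a separate `dg_state` helper keeps the asmcmd invocation and the output interpretation independently testable.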

IMO, the oraasm resource agent should never start or stop the ohasd (Oracle High Availability Services Daemon) service at all. Currently, **all** it does is start, stop, and monitor the ohasd service. It doesn't do anything specific to the diskgroup that it claims to manage. (A user could create a separate LSB-class resource to manage the ohasd service.)

Additionally, `diskgroup` is listed in the RA metadata as a required option. However, it's either ignored or practically ignored:
  - If `home` is set, then `diskgroup` is ignored entirely.
  - If `home` is not set, then `diskgroup` is used to look up a home directory from `/etc/oratab`. Then it's ignored after that.
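For context, that `/etc/oratab` lookup amounts to matching on the first (SID) field of the standard `sid:home:startflag` oratab format and printing the home field. A minimal sketch, where the `oratab_home` helper name is hypothetical and the oratab contents are read on stdin for testability:

```shell
# Look up an ORACLE_HOME by SID in oratab-format input on stdin.
# Standard format is "sid:home:Y|N", with "#" comment lines.
oratab_home() {
    awk -F: -v sid="$1" '/^[^#]/ && $1 == sid { print $2; exit }'
}
```

Usage would be along the lines of `oratab_home +ASM < /etc/oratab`.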

The monitor operation (checking the status of a diskgroup) can be done with `asmcmd lsdg`. Mounting and unmounting have to be done via SQL commands. That's more complicated, but I hope we'd be able to re-use a lot of the logic from the oracle resource agent.
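The mount/unmount side might look like the following sqlplus sketch, using Oracle's documented `ALTER DISKGROUP ... MOUNT` / `DISMOUNT` statements. The helper names, the SYSASM OS-authenticated connection, and the omitted environment setup and error handling are all assumptions for illustration:

```shell
# Sketch only; real start/stop actions would need ORACLE_HOME/ORACLE_SID
# handling, privilege drops, and error checking.

# Run one SQL statement against the local ASM instance as SYSASM.
asm_sql() {
    printf '%s\n' "$1" | sqlplus -S / as sysasm
}

# Start: mount the specified diskgroup.
oraasm_start() {
    asm_sql "ALTER DISKGROUP $1 MOUNT;"
}

# Stop: dismount the specified diskgroup.
oraasm_stop() {
    asm_sql "ALTER DISKGROUP $1 DISMOUNT;"
}
```

Routing everything through one `asm_sql` helper mirrors how the oracle resource agent wraps its sqlplus calls, which is where most of the reusable logic would come from.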

[1] https://docs.oracle.com/database/121/OSTMG/GUID-4782D609-766B-4686-B5E4-90A8EFC10DEA.htm#OSTMG94155
[2] https://docs.oracle.com/cd/B28359_01/server.111/b31107/asm_util.htm#OSTMG94273

-----

Version-Release number of selected component (if applicable):

resource-agents-4.1.1-68.el8

Comment 2 RHEL Program Management 2023-09-22 19:00:37 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 3 RHEL Program Management 2023-09-22 19:37:20 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.