Bug 2166683 - [RFE] Add new MD RAID resource agent (i.e. successor to upstream 'Raid1' agent) (RHEL9)
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: resource-agents
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 9.3
Assignee: Oyvind Albrigtsen
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On: 1810577 1741644 1810561
Blocks:
 
Reported: 2023-02-02 16:08 UTC by Oyvind Albrigtsen
Modified: 2023-08-10 15:40 UTC
CC: 19 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1810577
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker RHELPLAN-147477 (last updated 2023-02-02 16:12:24 UTC)
Red Hat Knowledge Base (Article) 2912911 (last updated 2023-02-02 16:58:35 UTC)
Red Hat Knowledge Base (Solution) 3977931 (last updated 2023-02-02 16:58:54 UTC)

Description Oyvind Albrigtsen 2023-02-02 16:08:25 UTC
+++ This bug was initially created as a clone of Bug #1810577 +++

Description of problem:
The upstream resource-agents package contains a 'Raid1' resource agent that is functionally limited with respect to MD Clustered RAID1/10 support.  This is the feature bz to track development of a successor to the 'Raid1' cloned pacemaker resource agent with cluster enhancements.  For instance, 'Raid1' has no notion of whether an MD array is clustered or not, so it can neither properly reject creation of a resource on an unsuitable array nor adjust to a clustered one automatically.
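For illustration, a minimal sketch of how a resource is typically set up with the current upstream 'Raid1' agent ('raidconf' and 'raiddev' are the agent's standard parameters; device paths and the resource name are hypothetical):

```shell
# Build a plain (single-host) RAID1 array -- device paths are examples only
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --detail --scan >> /etc/mdadm.conf

# Create a pacemaker resource with the current upstream Raid1 agent.
# The agent has no way to tell whether /dev/md0 is a clustered array,
# which is the limitation this RFE addresses.
pcs resource create md0 ocf:heartbeat:Raid1 \
    raidconf=/etc/mdadm.conf raiddev=/dev/md0
```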

Version-Release number of selected component (if applicable):
All

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
See the bz dependency for initial use/test-case examples (to be copied into and enhanced in this bz later)

--- Additional comment from Heinz Mauelshagen on 2020-03-13 17:31:32 CET ---

Including test cases from closed 1810561 here.

Test cases:
- active-passive
  - create MD resources for all levels (0/1/4/5/6/10) with force_clones=false
  - check they get started properly
  - put (single host) filesystems (xfs, ext4, gfs2) on top and load with I/O
  - define ordering constraints for ^
  - move them to other node manually
  - disable/enable them
  - add/remove legs to/from RAID1
  - fail/fence a node and check if they get started correctly
    on another online node
  - fail access to MD array leg(s) on its active node
    and analyse MD/resource/filesystem behaviour
  - reboot whole cluster and check for resource properly started
  - update the resource option force_clones to true
  - clone the resource (should fail on single-host MD arrays, but currently does not)
  - test for data corruption  (even with gfs2 on top)
  - remove all resources in/out of order and check for proper removal
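The first active-passive steps above can be sketched as follows (a hedged outline, not a definitive procedure; node names, device paths, resource names, and the mount point are hypothetical):

```shell
# Single-host RAID1 array with an xfs filesystem on top (devices are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.xfs /dev/md0

# MD resource with force_clones=false, plus filesystem and ordering constraints
pcs resource create md0 ocf:heartbeat:Raid1 \
    raidconf=/etc/mdadm.conf raiddev=/dev/md0 force_clones=false
pcs resource create fs0 ocf:heartbeat:Filesystem \
    device=/dev/md0 directory=/mnt/test fstype=xfs
pcs constraint order start md0 then fs0
pcs constraint colocation add fs0 with md0

# Manual move and disable/enable
pcs resource move md0 node2
pcs resource disable md0
pcs resource enable md0

# Add a leg to the raid1, then grow the leg count to activate it
mdadm /dev/md0 --add /dev/sdd
mdadm --grow /dev/md0 --raid-devices=3
```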

- active-active
  - create clustered MD resources for levels 1 and 10
  - check they get started properly
  - clone them
  - check they get started properly on all other online nodes
  - add/remove legs to/from RAID1
  - try reshaping RAID10
  - disable/enable them
  - fail a node and check if they get started correctly on another node
  - fail access to MD array leg(s) on any/multiple nodes
    and analyse MD/resource/filesystem behaviour
  - remove all resources in/out of order and check for proper removal
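The active-active setup above can be sketched roughly as follows (assuming a running pacemaker cluster; clustered MD needs dlm on all nodes, and the resource/group names and device paths are hypothetical):

```shell
# dlm must run on every node for clustered MD (common pattern, names assumed)
pcs resource create dlm ocf:pacemaker:controld --group locking
pcs resource clone locking interleave=true

# Create the array with a clustered write-intent bitmap
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=clustered /dev/sdb /dev/sdc

# With the current Raid1 agent, force_clones=true is needed to allow
# activation on multiple nodes; clone the resource across the cluster
pcs resource create cmd0 ocf:heartbeat:Raid1 \
    raidconf=/etc/mdadm.conf raiddev=/dev/md0 force_clones=true
pcs resource clone cmd0 interleave=true
```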

--- Additional comment from Oyvind Albrigtsen on 2020-04-15 16:44:14 CEST ---

https://github.com/ClusterLabs/resource-agents/pull/1481

