Bug 1855945 - RHHI-V deployment fails when using multipath configuration and lvm cache
Summary: RHHI-V deployment fails when using multipath configuration and lvm cache
Keywords:
Status: CLOSED DUPLICATE of bug 1851114
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ritesh Chikatwar
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: RHHI-V_1.8_Release_Notes
 
Reported: 2020-07-11 07:19 UTC by SATHEESARAN
Modified: 2020-12-07 07:24 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
During deployment of RHHI for Virtualization with multipath device names, volume groups (VG) and logical volumes (LV) are created with the WWID as a suffix, leading to LV names longer than 128 characters. This causes LV cache creation to fail. To work around this issue, when initiating RHHI for Virtualization deployment with multipath device names in the form /dev/mapper/<WWID>, replace the VG and thinpool suffix with the last 4 digits of the WWID:
1. During deployment from the web console, provide multipath device names in the form /dev/mapper/<WWID> for bricks.
2. Click Next to generate an inventory file.
3. Log in to the deployment node via SSH.
4. Find the <WWID> used in the LVM component names: # grep vg /etc/ansible/hc_wizard_inventory.yml
5. For each WWID, replace it with its last 4 digits: # sed -i 's/<WWID>/<last-4-digit-WWID>/g' /etc/ansible/hc_wizard_inventory.yml
6. Continue the deployment from the web console.
Clone Of:
Environment:
Last Closed: 2020-12-07 07:24:18 UTC
Embargoed:



Description SATHEESARAN 2020-07-11 07:19:21 UTC
Description of problem:
------------------------
When multipath names are used for disks, the disk name takes the form /dev/mapper/<WWID>, for example /dev/mapper/360030480197f830125618b0119fc270d. When such a disk is used, the LV names also include this WWID.

When LV cache is used for the bricks, the LV names for the cache also become too long.
The maximum allowed LV name length is 128 characters, and in this specific case the LV name exceeds 128.
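
For illustration only (the exact names generated by the deployment are not reproduced in this bug; the 'gluster_vg_' prefix is taken from the workaround naming below), the WWID alone is 33 characters, so a VG name carrying the full WWID as a suffix is already 44 characters, and cache LV names that embed several such components quickly run past the 128-character limit:

        # WWID=360030480197f830125618b0119fc270d
        # echo ${#WWID}
        33
        # VG=gluster_vg_${WWID}
        # echo ${#VG}
        44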

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 8.2
RHHI-V 1.8

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Start RHHI-V deployment from the web console
2. Use multipath device names for the disks, i.e. /dev/mapper/<WWID>
3. Enable LV cache and provide the cache LV device name in /dev/mapper/<WWID> format as well
4. Start RHHI-V deployment

Actual results:
----------------
Deployment fails with an error stating that the LV name is more than 128 characters


Expected results:
-----------------
Deployment should be successful

Comment 2 SATHEESARAN 2020-07-11 07:24:22 UTC
The source of this problem is the device name.
If the device name is a WWID, then use only the last 4 digits of the WWID to create the names of the LVM components - VG and thinpool.

For example, when using the device name /dev/mapper/360030480197f830125618b0119fc270d,
the VG name should be 'gluster_vg_270d' and the thinpool name should be 'gluster_thinpool_270d'
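
A minimal shell sketch of that shortening convention (bash substring expansion on the deployment node; the actual fix may derive the short suffix differently):

        # WWID=360030480197f830125618b0119fc270d
        # echo gluster_vg_${WWID: -4}
        gluster_vg_270d
        # echo gluster_thinpool_${WWID: -4}
        gluster_thinpool_270d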

Comment 3 SATHEESARAN 2020-07-11 07:52:22 UTC
Marking this bug as known_issue for RHHI-V 1.8

The workaround here is:

1. Start the RHHI-V deployment from the web console
2. In the bricks tab, enter the device names in /dev/mapper/<WWID> format
3. Proceed to the next tab to generate the inventory file, and hold the deployment there until the following tasks are completed
4. Log in to the deployment node (the node from which the deployment web console is launched) via SSH
5. Find all devices referenced by WWID
       # grep vg /etc/ansible/hc_wizard_inventory.yml
6. For each unique WWID listed with a VG, replace it with its last 4 digits (a worked example follows this list)
       # sed -i 's/<WWID>/<last-4-digit-WWID>/g' /etc/ansible/hc_wizard_inventory.yml
7. On the last tab of the deployment, click 'Reload'
8. Complete the deployment from the web console
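
As a worked example for step 6, using the WWID from the description above (assuming that is the WWID that appears in the generated inventory file; note that, as written, the sed command replaces every occurrence of that WWID in the file):

       # sed -i 's/360030480197f830125618b0119fc270d/270d/g' /etc/ansible/hc_wizard_inventory.yml

Re-running the grep from step 5 afterwards should show entries such as gluster_vg_270d and gluster_thinpool_270d.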


Note that this problem occurs only when both multipath configuration and LV cache are enabled.
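
As a quick check before applying the workaround (standard device-mapper multipath tools, not specific to RHHI-V), you can confirm that the brick devices are in fact multipath maps:

       # multipath -ll
       # lsblk -o NAME,TYPE | grep mpath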

Comment 8 Gobinda Das 2020-12-07 07:24:18 UTC
The same issue is being fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1851114, so closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 1851114 ***

