Description of problem:
-----------------------
The user can enable dedupe & compression on a select few, or on all, of the bricks on the same device. Dedupe & compression is applied per device, so enabling it on any brick enables it for all the bricks created on that device. However, when the user chooses to enable dedupe & compression on 2 bricks from the same device, 2 different VDO volumes are created; when the user chooses to enable it on 3 bricks (engine, vmstore, data) from the same device, 3 different VDO volumes are created.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Start RHHI installation from cockpit using 'Hosted Engine with Gluster'
2. On the bricks tab (4th tab), select 'Enable Dedupe & Compression' on all 3 bricks
3. Observe the generated gdeploy config file

Actual results:
---------------
1. The generated gdeploy config file has sections to create 3 different VDO volumes
2. The name of each VDO volume is not generic

Expected results:
-----------------
1. The generated gdeploy config file should have a section to create only one VDO volume
2. The name of the VDO volume should be generic, corresponding to the device, e.g. vdovolume_sdb
Additional info:
----------------
Following is the snip of the config file with dedupe & compression enabled on all 3 bricks (i.e. engine, data, vmstore):

[vdo]
action=create
devices=sdb,sdb,sdb
names=engine,data,vmstore
logicalsize=400,2000,2000

Following is the snip of the config file with dedupe & compression enabled on 2 bricks (i.e. engine, data):

[vdo]
action=create
devices=sdb,sdb
names=engine,data
logicalsize=400,2000

Irrespective of whether dedupe & compression is enabled on 1, 2, or 3 bricks, the expected config file is:

[vdo]
action=create
devices=sdb
names=vdovol_sdb
logicalsize=<size>
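For illustration, here is a minimal TypeScript sketch of the grouping the config generator would need: collect the dedupe-enabled bricks per device and emit a single [vdo] section per device, rather than one per brick. All names and types below are hypothetical (not taken from the cockpit-ovirt-dashboard source), and summing the brick sizes into one logicalsize is an assumption; the bug report leaves the combined size as <size>.

// Hypothetical brick model; field names are illustrative only.
interface Brick {
  name: string;          // e.g. "engine"
  device: string;        // e.g. "sdb"
  logicalSizeGB: number; // brick logical size in GB
  dedupe: boolean;       // 'Enable Dedupe & Compression' checked for this brick
}

// Emit one [vdo] section per device instead of one per brick.
function vdoSections(bricks: Brick[]): string {
  // Total the logical sizes of dedupe-enabled bricks, keyed by device
  // (assumption: the single VDO volume spans all such bricks).
  const sizeByDevice = new Map<string, number>();
  for (const b of bricks) {
    if (!b.dedupe) continue;
    sizeByDevice.set(b.device, (sizeByDevice.get(b.device) ?? 0) + b.logicalSizeGB);
  }
  // One section per device, with a generic per-device VDO name.
  return Array.from(sizeByDevice, ([device, size]) =>
    `[vdo]\naction=create\ndevices=${device}\nnames=vdovol_${device}\nlogicalsize=${size}`
  ).join("\n\n");
}

// Example: all 3 bricks on sdb produce a single section with
// devices=sdb and names=vdovol_sdb.
console.log(vdoSections([
  { name: "engine",  device: "sdb", logicalSizeGB: 400,  dedupe: true },
  { name: "data",    device: "sdb", logicalSizeGB: 2000, dedupe: true },
  { name: "vmstore", device: "sdb", logicalSizeGB: 2000, dedupe: true },
]));

Whatever the generator's actual data model, the key point is the keying: sections are produced from the set of devices, not from the list of bricks, which matches the expected config above.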
Tested with cockpit-ovirt-dashboard-0.11.22. There is only one VDO volume created for all the bricks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:3523