Description of problem:
A validation check during deployment set-up verifies that the installation is not targeting a partition on a media device, per the historic "recommendation" not to install on partitions. A recommendation, including this one, that is checked during install should NOT exit the install process.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Doc: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/index#task-config-rhgs-using-cockpit
Device
"Specify the raw device you want to use. Red Hat recommends an unpartitioned device"
Quick update to the BZ: this change of issuing a warning and continuing with the installation is only for POCs. Production environments still need to install on a full drive, not on partitions. This needs to be called out in the documentation, and the warning text should clearly state that this is only for POCs and that production environments are not supported. @Sean will provide more verbiage around this. I'm adding this note so that development and test are clear on how this BZ should be addressed. Marking Need Info to Sean for the verbiage.
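The requested behavior (warn and continue instead of aborting the install) could be sketched as below. This is a minimal illustration, not the actual cockpit-ovirt/gluster-ansible code; the function names and the sysfs-based partition heuristic are assumptions for the example.

```python
import os
import sys

def is_partition(device):
    # Heuristic (illustrative): a partition's sysfs entry has a 'partition'
    # attribute file, e.g. /sys/class/block/sdr1/partition exists while
    # /sys/class/block/sdr/partition does not.
    name = os.path.basename(device)
    return os.path.exists("/sys/class/block/%s/partition" % name)

def validate_brick_device(device):
    # Old behavior: exit the install when a partition is selected.
    # Requested behavior: warn and continue, with POC-only wording.
    if is_partition(device):
        print("WARNING: %s is a partition. Deploying on partitions is "
              "intended for POC environments only; production deployments "
              "must use a full, unpartitioned disk." % device,
              file=sys.stderr)
    return True  # deployment continues either way
```

The key design point is that the validation result no longer gates the install path; the recommendation is surfaced to the user but never raised as a fatal error.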
Dependent bug is in POST state
The dependent bug is already ON_QA; moving this bug to ON_QA as well.
Tested with RHEL 7.7 (3.10.0-1059.el7.x86_64) and an RHGS 3.5.0 interim build (glusterfs-6.0-7), with the following gluster-ansible builds:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch
gluster-ansible-roles-1.0.5-4.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch

Able to complete deployment with partitions:

sdr                                                            65:16  0 223.1G  0 disk
├─sdr1                                                         65:17  0   110G  0 part
│ └─gluster_vg_sdr1-gluster_lv_engine                         253:3   0   100G  0 lvm  /gluster_bricks/engine
└─sdr2                                                         65:18  0   110G  0 part
  ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tmeta    253:4   0     1G  0 lvm
  │ └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool  253:6   0   108G  0 lvm
  │   ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2      253:7   0   108G  0 lvm
  │   └─gluster_vg_sdr2-gluster_lv_commserve_vol              253:8   0   100G  0 lvm  /gluster_bricks/commserve_vol
  └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tdata    253:5   0   108G  0 lvm
    └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool  253:6   0   108G  0 lvm
      ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2      253:7   0   108G  0 lvm
      └─gluster_vg_sdr2-gluster_lv_commserve_vol              253:8   0   100G  0 lvm  /gluster_bricks/commserve_vol
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2963