Description of problem:
Previous versions of the Cockpit installer for RHHI allowed partitions (e.g. /dev/sdb2, /dev/sdc4) to be used when creating bricks. A recent change breaks this behavior.

How reproducible:
Install via Cockpit and, on Step 3, enter a partition (/dev/sdb1) instead of the whole device (/dev/sdb).

Actual results:
An error is returned saying that 4k devices aren't supported.

Expected results:
The brick is created from the partition.

Additional info:
Not sure where the upstream code is, but this is where I found the offending check in the RHVH 4.3 image:

[root@localhost gluster.features]# grep -A7 'logical block size of 512B' ./roles/gluster_hci/tasks/prerequisites.yml
- name: Check if disks have logical block size of 512B else fail
  command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
  register: logical_blk_size
  failed_when: logical_blk_size.stdout|int != 512
  when: gluster_infra_volume_groups is defined and
        item.pvname is not search("/dev/mapper") and
        gluster_features_512B_check|default(true)
  with_items: "{{ gluster_infra_volume_groups }}"
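For what it's worth, a sketch of why the sysfs-based check can never pass for a partition (hypothetical session on a host that has /dev/sdb and /dev/sdb1; not taken from the affected machines): /sys/block only has entries for whole disks, the per-partition directories underneath them carry no queue/ attributes, while blockdev answers for both disks and partitions:

[root@localhost ~]# cat /sys/block/sdb/queue/logical_block_size
512
[root@localhost ~]# cat /sys/block/sdb1/queue/logical_block_size
cat: /sys/block/sdb1/queue/logical_block_size: No such file or directory
[root@localhost ~]# ls /sys/block/sdb/sdb1/queue
ls: cannot access /sys/block/sdb/sdb1/queue: No such file or directory
[root@localhost ~]# blockdev --getss /dev/sdb1
512

With an empty stdout, the task's failed_when condition (stdout|int != 512) is always true for a partition regardless of its real block size, which is presumably what Cockpit surfaces as the "4k devices aren't supported" error.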
Please link to the gluster-ansible bug.
Bugs https://bugzilla.redhat.com/show_bug.cgi?id=1686359 and https://bugzilla.redhat.com/show_bug.cgi?id=1714781 cover this bug.

The task:

- name: Check if disks have logical block size of 512B else fail
  command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
  register: logical_blk_size
  failed_when: logical_blk_size.stdout|int != 512
  when: gluster_infra_volume_groups is defined and
        item.pvname is not search("/dev/mapper") and
        gluster_features_512B_check|default(true)
  with_items: "{{ gluster_infra_volume_groups }}"

is now replaced with:

--- a/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml
+++ b/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml
@@ -49,8 +49,8 @@
     # logical block size of 512 bytes. To disable the check set
     # gluster_features_512B_check to false. DELETE the below task once
     # OVirt limitation is fixed
-    - name: Check if disks have logical block size of 512B else fail
-      command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
+    - name: Check if disks have logical block size of 512B
+      command: blockdev --getss {{ item.pvname }}
       register: logical_blk_size
       when: gluster_infra_volume_groups is defined and
             item.pvname is not search("/dev/mapper") and
@@ -67,6 +67,25 @@
       loop: "{{ logical_blk_size.results }}"
       loop_control:
         label: "Logical Block Size"
+
+    - name: Check logical block size of VDO devices
+      command: blockdev --getss {{ item.device }}
+      register: logical_blk_size
+      when: gluster_infra_vdo is defined and
+            gluster_features_512B_check|default(true)
+      with_items: "{{ gluster_infra_vdo }}"
+
+    - name: Check if logical block size is 512 bytes
+      assert:
+        that:
+          - "item.stdout|int == 512"
+        fail_msg: "The logical block size of disk is not 512 bytes"
+      when: gluster_infra_vdo is defined and
+            gluster_features_512B_check|default(true)
+      loop: "{{ logical_blk_size.results }}"
+      loop_control:
+        label: "Logical Block Size"

PR: https://github.com/gluster/gluster-ansible/pull/70/files
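The comment kept in the diff also points at the escape hatch: the whole check can be switched off by setting gluster_features_512B_check to false. A minimal sketch of doing that through the deployment variables (the variable name comes from the task's when clause; where exactly it is set depends on how your inventory is laid out):

# hedged example only; place in the vars section your deployment actually uses
vars:
  gluster_features_512B_check: false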
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build, glusterfs-6.0-6) and ansible-2.8.1-1, with:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

When checking for 512B, the validation fails.
Error message:

fatal: [host1.lab.eng.blr.redhat.com]: FAILED! => {"msg": "The conditional check 'item.stdout|int == 512' failed. The error was: error while evaluating conditional (item.stdout|int == 512): 'dict object' has no attribute 'stdout'"}
(In reply to SATHEESARAN from comment #7)
> Error message:
>
> fatal: [host1.lab.eng.blr.redhat.com]: FAILED! => {"msg": "The conditional
> check 'item.stdout|int == 512' failed. The error was: error while evaluating
> conditional (item.stdout|int == 512): 'dict object' has no attribute
> 'stdout'"}

Ack, this happens when a combination of VDO and non-VDO devices is used. Patch posted here: https://github.com/gluster/gluster-ansible/pull/74
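For reference, the generic Ansible pattern behind this failure: loop items that are skipped by a when clause still appear in the registered results, but without a stdout key, so a follow-up assert over those results has to check for it before comparing. A minimal sketch of such a guard, based on the assert task quoted earlier (an illustration of the pattern only, not necessarily the exact change in PR #74):

- name: Check if logical block size is 512 bytes
  assert:
    that:
      - "item.stdout|int == 512"
    fail_msg: "The logical block size of disk is not 512 bytes"
  # item.stdout is absent for loop items the command task skipped,
  # so guard on it before comparing
  when: gluster_infra_vdo is defined and
        gluster_features_512B_check|default(true) and
        item.stdout is defined
  loop: "{{ logical_blk_size.results }}"
  loop_control:
    label: "Logical Block Size"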
Tested with RHEL 7.7 (3.10.0-1059.el7.x86_64) and RHGS 3.5.0 interim build (glusterfs-6.0-7), with gluster-ansible builds:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch
gluster-ansible-roles-1.0.5-4.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch

Able to complete deployment with partitions.

sdr                                                           65:16  0 223.1G  0 disk
├─sdr1                                                        65:17  0   110G  0 part
│ └─gluster_vg_sdr1-gluster_lv_engine                        253:3   0   100G  0 lvm  /gluster_bricks/engine
└─sdr2                                                        65:18  0   110G  0 part
  ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tmeta   253:4   0     1G  0 lvm
  │ └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool 253:6   0   108G  0 lvm
  │   ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2     253:7   0   108G  0 lvm
  │   └─gluster_vg_sdr2-gluster_lv_commserve_vol             253:8   0   100G  0 lvm  /gluster_bricks/commserve_vol
  └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tdata   253:5   0   108G  0 lvm
    └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool 253:6   0   108G  0 lvm
      ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2     253:7   0   108G  0 lvm
      └─gluster_vg_sdr2-gluster_lv_commserve_vol             253:8   0   100G  0 lvm  /gluster_bricks/commserve_vol
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2557