Bug 1713816 - Ansible prerequisites.yml prevents bricks from being created on partitions. Only whole disks are allowed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhhiv-1.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1714781
 
Reported: 2019-05-24 23:05 UTC by John Call
Modified: 2019-10-03 07:58 UTC
CC List: 6 users

Fixed In Version: gluster-ansible-roles-1.0.5-3, gluster-ansible-features-1.0.5-2
Doc Type: Bug Fix
Doc Text:
Previously, the sysfs pseudo file system was used to verify the logical block size of a disk being used as a brick. However, this failed when partitions of a disk were used instead of an entire disk, because the /sys/block/<diskname>/queue/logical_block_size file being checked relates only to whole disks. The output of the blockdev command is now used to verify logical block size instead, and deployment succeeds when either whole disks or partitions are used to create bricks.
Clone Of:
Environment:
Last Closed: 2019-10-03 07:58:12 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:2557 (last updated 2019-10-03 07:58:34 UTC)

Description John Call 2019-05-24 23:05:24 UTC
Description of problem:
Previous versions of the Cockpit installer for RHHI allowed partitions (e.g. /dev/sdb2, /dev/sdc4) to be used when creating bricks. A recent change breaks this behavior.


How reproducible:
Install via Cockpit and, on Step #3, enter a partition (/dev/sdb1) instead of the entire device (/dev/sdb).


Actual results:
An error is returned stating that 4K devices aren't supported.


Expected results:
Brick is created from the partition


Additional info:
Not sure where the upstream code is, but this is where I found the offending check in the RHVH 4.3 image:

[root@localhost gluster.features]# grep -A7 'logical block size of 512B' ./roles/gluster_hci/tasks/prerequisites.yml
- name: Check if disks have logical block size of 512B else fail
  command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
  register: logical_blk_size
  failed_when: logical_blk_size.stdout|int != 512
  when: gluster_infra_volume_groups is defined and
    item.pvname is not search("/dev/mapper") and
    gluster_features_512B_check|default(true)
  with_items: "{{ gluster_infra_volume_groups }}"

Comment 2 Sahina Bose 2019-06-10 05:19:06 UTC
Please link to the gluster-ansible bug.

Comment 3 Sachidananda Urs 2019-06-10 12:42:40 UTC
Bugs https://bugzilla.redhat.com/show_bug.cgi?id=1686359 and https://bugzilla.redhat.com/show_bug.cgi?id=1714781 cover this bug. The current check:

- name: Check if disks have logical block size of 512B else fail
  command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
  register: logical_blk_size
  failed_when: logical_blk_size.stdout|int != 512
  when: gluster_infra_volume_groups is defined and
    item.pvname is not search("/dev/mapper") and
    gluster_features_512B_check|default(true)
  with_items: "{{ gluster_infra_volume_groups }}"


is now replaced with:

--- a/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml
+++ b/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml
@@ -49,8 +49,8 @@
     # logical block size of 512 bytes. To disable the check set
     # gluster_features_512B_check to false. DELETE the below task once
     # OVirt limitation is fixed
-    - name: Check if disks have logical block size of 512B else fail
-      command: cat /sys/block/{{item.pvname|basename}}/queue/logical_block_size
+    - name: Check if disks have logical block size of 512B
+      command: blockdev --getss {{ item.pvname }}
       register: logical_blk_size
       when: gluster_infra_volume_groups is defined and
             item.pvname is not search("/dev/mapper") and
@@ -67,6 +67,25 @@
       loop: "{{ logical_blk_size.results }}"
       loop_control:
         label: "Logical Block Size"
+
+    - name: Check logical block size of VDO devices
+      command: blockdev --getss {{ item.device }}
+      register: logical_blk_size
+      when: gluster_infra_vdo is defined and
+            gluster_features_512B_check|default(true)
+      with_items: "{{ gluster_infra_vdo }}"
+
+    - name: Check if logical block size is 512 bytes
+      assert:
+        that:
+          - "item.stdout|int == 512"
+        fail_msg: "The logical block size of disk is not 512 bytes"
+      when: gluster_infra_vdo is defined and
+            gluster_features_512B_check|default(true)
+      loop: "{{ logical_blk_size.results }}"
+      loop_control:
+        label: "Logical Block Size"


PR: https://github.com/gluster/gluster-ansible/pull/70/files
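
For reference, on a hypothetical host where /dev/sdb is a 512-byte-sector disk carrying a partition /dev/sdb1, the replacement command reports the same logical block size for the whole disk and for the partition, so the check now passes in both cases:

[root@localhost ~]# blockdev --getss /dev/sdb    # whole disk (hypothetical device names)
512
[root@localhost ~]# blockdev --getss /dev/sdb1   # partition
512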

Comment 6 SATHEESARAN 2019-06-26 06:18:59 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build, glusterfs-6.0-6) and Ansible 2.8.1-1, with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

When checking for 512B, the validation fails

Comment 7 SATHEESARAN 2019-06-26 06:49:22 UTC
Error message:

fatal: [host1.lab.eng.blr.redhat.com]: FAILED! => {"msg": "The conditional check 'item.stdout|int == 512' failed. The error was: error while evaluating conditional (item.stdout|int == 512): 'dict object' has no attribute 'stdout'"}

Comment 8 Sachidananda Urs 2019-06-26 08:52:58 UTC
(In reply to SATHEESARAN from comment #7)
> Error message:
> 
> fatal: [host1.lab.eng.blr.redhat.com]: FAILED! => {"msg": "The conditional
> check 'item.stdout|int == 512' failed. The error was: error while evaluating
> conditional (item.stdout|int == 512): 'dict object' has no attribute
> 'stdout'"}

Ack, this happens when a combination of VDO and non-VDO devices is used. Patch
posted here: https://github.com/gluster/gluster-ansible/pull/74
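
For reference, the failure comes from asserting item.stdout for every entry in logical_blk_size.results, including entries for loop items that were skipped (for example /dev/mapper devices when VDO and plain disks are mixed); skipped results carry no stdout key. The exact fix is in the PR above; a minimal sketch of one way to guard such an assert, shown here on the VDO assert from the diff in comment 3 with only the item.stdout condition added:

    - name: Check if logical block size is 512 bytes
      assert:
        that:
          - "item.stdout|int == 512"
        fail_msg: "The logical block size of disk is not 512 bytes"
      when: gluster_infra_vdo is defined and
            gluster_features_512B_check|default(true) and
            item.stdout is defined    # skipped loop items carry no stdout key
      loop: "{{ logical_blk_size.results }}"
      loop_control:
        label: "Logical Block Size"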

Comment 9 SATHEESARAN 2019-07-02 17:39:26 UTC
Tested with RHEL 7.7 (3.10.0-1059.el7.x86_64) and the RHGS 3.5.0 interim build (glusterfs-6.0-7)
with gluster-ansible builds:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch
gluster-ansible-roles-1.0.5-4.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch


Able to complete the deployment with partitions, as the lsblk output below shows:

sdr                                                           65:16   0 223.1G  0 disk  
├─sdr1                                                        65:17   0   110G  0 part  
│ └─gluster_vg_sdr1-gluster_lv_engine                        253:3    0   100G  0 lvm   /gluster_bricks/engine
└─sdr2                                                        65:18   0   110G  0 part  
  ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tmeta   253:4    0     1G  0 lvm   
  │ └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool 253:6    0   108G  0 lvm   
  │   ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2     253:7    0   108G  0 lvm   
  │   └─gluster_vg_sdr2-gluster_lv_commserve_vol             253:8    0   100G  0 lvm   /gluster_bricks/commserve_vol
  └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2_tdata   253:5    0   108G  0 lvm   
    └─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2-tpool 253:6    0   108G  0 lvm   
      ├─gluster_vg_sdr2-gluster_thinpool_gluster_vg_sdr2     253:7    0   108G  0 lvm   
      └─gluster_vg_sdr2-gluster_lv_commserve_vol             253:8    0   100G  0 lvm   /gluster_bricks/commserve_vol

Comment 12 errata-xmlrpc 2019-10-03 07:58:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557

