Bug 1674608 - Add check during gluster deployment for whether 512b writes are supported on disks used for gluster bricks
Summary: Add check during gluster deployment for whether 512b writes are supported on disks used for gluster bricks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 4
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1674606 1680597
 
Reported: 2019-02-11 16:42 UTC by Gobinda Das
Modified: 2019-03-27 03:44 UTC
CC List: 6 users

Fixed In Version: gluster-ansible-features-1.0.4-4.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1674606
Environment:
Last Closed: 2019-03-27 03:44:39 UTC
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHBA-2019:0661
Private: 0
Priority: None
Status: None
Summary: None
Last Updated: 2019-03-27 03:44:53 UTC

Description Gobinda Das 2019-02-11 16:42:54 UTC
+++ This bug was initially created as a clone of Bug #1674606 +++

Description of problem:
Right now there is no validation during gluster deployment to check whether the disks used for gluster bricks support 512b writes.
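
As a rough illustration of the kind of check being requested (not the actual gluster-ansible implementation; the playbook layout, host group, and the gluster_infra_volume_groups variable below are assumptions used only for this sketch), a task could read each brick device's logical block size from sysfs and fail when it is not 512:

- hosts: gluster_nodes
  vars:
    # Example brick layout; assumed for this sketch only.
    gluster_infra_volume_groups:
      - { vgname: gluster_vg_sdb, pvname: /dev/sdb }
  tasks:
    - name: Check that brick disks use a 512-byte logical block size
      command: cat /sys/block/{{ item.pvname | basename }}/queue/logical_block_size
      register: blocksize
      changed_when: false
      failed_when: (blocksize.stdout | trim) != "512"
      with_items: "{{ gluster_infra_volume_groups }}"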

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2019-02-11 16:42:18 UTC ---

This bug is automatically being proposed for the RHHI-V 1.6 release of the Red Hat Hyperconverged Infrastructure for Virtualization product, by setting the release flag 'rhiv-1.6' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

Comment 2 Sachidananda Urs 2019-02-19 14:12:25 UTC
github.com/gluster/gluster-ansible-features/pull/20/commits/5c375168 fixes the issue.

Comment 4 SATHEESARAN 2019-02-23 11:40:58 UTC
This validation fails on the VDO disks and blocks deployment.

The validation is also run for vdo devices, and it errors out:
<snip>
failed: [dhcp37-124.lab.eng.blr.redhat.com] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/mapper/vdo_sdc'}) => {"changed": true, "cmd": ["cat", "/sys/block/vdo_sdc/queue/logical_block_size"], "delta": "0:00:01.006596", "end": "2019-02-23 12:57:13.896446", "failed_when_result": true, "item": {"pvname": "/dev/mapper/vdo_sdc", "vgname": "gluster_vg_sdc"}, "msg": "non-zero return code", "rc": 1, "start": "2019-02-23 12:57:12.889850", "stderr": "cat: /sys/block/vdo_sdc/queue/logical_block_size: No such file or directory", "stderr_lines": ["cat: /sys/block/vdo_sdc/queue/logical_block_size: No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [dhcp37-138.lab.eng.blr.redhat.com] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/mapper/vdo_sdc'}) => {"changed": true, "cmd": ["cat", "/sys/block/vdo_sdc/queue/logical_block_size"], "delta": "0:00:01.009365", "end": "2019-02-23 12:57:14.234254", "failed_when_result": true, "item": {"pvname": "/dev/mapper/vdo_sdc", "vgname": "gluster_vg_sdc"}, "msg": "non-zero return code", "rc": 1, "start": "2019-02-23 12:57:13.224889", "stderr": "cat: /sys/block/vdo_sdc/queue/logical_block_size: No such file or directory", "stderr_lines": ["cat: /sys/block/vdo_sdc/queue/logical_block_size: No such file or directory"], "stdout": "", "stdout_lines": []}
</snip>
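
For reference on why this fails: the PV of a VDO-backed brick is a device-mapper node (/dev/mapper/vdo_sdc), so there is no /sys/block/vdo_sdc entry; the sysfs queue attributes live under the dm-N name that the mapper node resolves to. A hedged sketch of one way the task could read the attribute through the resolved name (an illustration only, not the patch that was merged; it reuses the assumed gluster_infra_volume_groups variable from the description above) would be:

- name: Resolve each PV to its kernel device name (sdX or dm-N)
  command: realpath {{ item.pvname }}
  register: realdev
  changed_when: false
  with_items: "{{ gluster_infra_volume_groups }}"

- name: Read the logical block size of the resolved device
  command: cat /sys/block/{{ item.stdout | basename }}/queue/logical_block_size
  register: blocksize
  changed_when: false
  failed_when: (blocksize.stdout | trim) != "512"
  with_items: "{{ realdev.results }}"

Note that for a VDO volume the value reported here depends on how the volume was created (512-byte emulation or not), so whether the 512 requirement should apply to the VDO layer at all is a separate question from the missing-file error above.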

Comment 5 Sachidananda Urs 2019-02-28 06:53:46 UTC
My mistake, I missed the mixed vdo and non-vdo disks scenario.
Patch: https://github.com/gluster/gluster-ansible-features/pull/21/files fixes the issue.
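
For readers following along, one possible shape for such a fix is to run the raw-disk check only on non-VDO PVs; the task below is only a guess at that idea for illustration (the when condition and the gluster_infra_volume_groups variable are assumptions, not the contents of the pull request above):

- name: Check logical block size only for plain (non-VDO) brick disks
  command: cat /sys/block/{{ item.pvname | basename }}/queue/logical_block_size
  register: blocksize
  changed_when: false
  failed_when: (blocksize.stdout | trim) != "512"
  with_items: "{{ gluster_infra_volume_groups }}"
  when: "'vdo' not in item.pvname"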

Comment 6 SATHEESARAN 2019-03-07 11:26:10 UTC
Tested with gluster-ansible-features-1.0.4-5

1. When the logical_block_size of the disk is not 512 bytes, the validation fails
2. When the logical_block_size of the disk is 512 bytes, the validation succeeds and the deployment completes

Comment 8 errata-xmlrpc 2019-03-27 03:44:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0661

