Bug 1626477 - Bricks created with md devices on top of NVMe drives do not work
Summary: Bricks created with md devices on top of NVMe drives do not work
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.5.z Async
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1557434 1626479
Blocks: RHHIV-1.5.z-Backlog-BZs
 
Reported: 2018-09-07 12:31 UTC by SATHEESARAN
Modified: 2019-05-20 04:55 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-20 04:55:45 UTC
Embargoed:



Description SATHEESARAN 2018-09-07 12:31:48 UTC
Description of problem:
-----------------------
During testing for RHHI4V-Ready, Marko hit a problem with NVMe drives: bricks created with md devices on top of NVMe drives hit the "bio too big" problem.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI 2.0
kernel-3.10.0-863.el7

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a software RAID with 'md' over the NVMe drives
2. Use this device as the brick for the engine volume (a command-level sketch follows these steps)
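
For reference, a minimal sketch of this kind of layout, assuming two NVMe drives (/dev/nvme0n1 and /dev/nvme1n1) and hypothetical VG/LV/mount names; the exact devices and sizes used in Marko's setup are not recorded in this bug:

# Assemble an md RAID0 across the NVMe drives (device names are assumptions)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Build the brick LVM stack on top of the md device (names are hypothetical)
pvcreate /dev/md0
vgcreate gluster_vg /dev/md0
lvcreate -L 100G -n gluster_lv_engine gluster_vg
mkfs.xfs -i size=512 /dev/gluster_vg/gluster_lv_engine
mount /dev/gluster_vg/gluster_lv_engine /gluster_bricks/engine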

Actual results:
----------------
Error while creating bricks with md layer on top of NVMe drives

Expected results:
-----------------
No errors


Additional info:
------------------
Thanks to Marko for the information.

<snip>
[48684.118540] bio too big device dm-14 (504 > 256)
[48684.178821] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[48684.183512] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[48684.188373] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[49010.568086] bio too big device dm-14 (512 > 256)
[49010.568124] Buffer I/O error on dev dm-18, logical block 40959815, lost async page write
[49010.568168] Buffer I/O error on dev dm-18, logical block 40959816, lost async page write
[49010.568192] Buffer I/O error on dev dm-18, logical block 40959817, lost async page write
[49010.568217] Buffer I/O error on dev dm-18, logical block 40959818, lost async page write
[49010.568247] Buffer I/O error on dev dm-18, logical block 40959819, lost async page write
[49010.568270] Buffer I/O error on dev dm-18, logical block 40959820, lost async page write
[49010.568294] Buffer I/O error on dev dm-18, logical block 40959821, lost async page write
[49010.568317] Buffer I/O error on dev dm-18, logical block 40959822, lost async page write
[49010.568341] Buffer I/O error on dev dm-18, logical block 40959823, lost async page write
[49010.568365] Buffer I/O error on dev dm-18, logical block 40959824, lost async page write
[49388.815989] ovirtmgmt: port 2(vnet0) entered disabled state
[49388.818310] device vnet0 left promiscuous mode
[49388.818321] ovirtmgmt: port 2(vnet0) entered disabled state
[49510.213142] ovirtmgmt: port 2(vnet0) entered blocking state
[49510.213146] ovirtmgmt: port 2(vnet0) entered disabled state
[49510.213238] device vnet0 entered promiscuous mode
[49510.213813] ovirtmgmt: port 2(vnet0) entered blocking state
[49510.213817] ovirtmgmt: port 2(vnet0) entered forwarding state
[49697.210495] bio too big device dm-14 (504 > 256)
[49697.258032] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
[49697.263049] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
[49697.267329] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
</snip>

Comment 1 Heinz Mauelshagen 2018-09-10 12:47:58 UTC
(In reply to SATHEESARAN from comment #0)
> Description of problem:
> -----------------------
> During the testing for RHHI4V-Ready, Marko has hit the problem with NVMe
> drives. Bricks created with md devices on top of NVMe drives, hit the "bio
> too big" problem
> 

If it is an option, you may work around this md/raid0 bug by deploying LVM striped LV(s) across the NVMe backing devices instead.
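
For illustration, a minimal sketch of that workaround, assuming the same two NVMe drives and hypothetical VG/LV names (none of these names come from this bug):

# Use the NVMe drives as PVs directly, with no md layer in between
pvcreate /dev/nvme0n1 /dev/nvme1n1
vgcreate gluster_vg /dev/nvme0n1 /dev/nvme1n1

# Create a striped LV across both PVs instead of an md/raid0 device
# (-i: number of stripes, -I: stripe size)
lvcreate --type striped -i 2 -I 256k -L 100G -n gluster_lv_engine gluster_vg
mkfs.xfs -i size=512 /dev/gluster_vg/gluster_lv_engine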

Comment 2 Sahina Bose 2018-11-19 05:22:39 UTC
Moving to ON_QA as the dependent fix is available in RHEL 7.6

Comment 5 SATHEESARAN 2019-02-05 16:50:57 UTC
Tested with RHV 4.2.8 async.

1. Created an md RAID0
2. Used this software RAID device for lvmcache (a sketch of this setup follows below)

No problems were observed; everything looks good.
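
For reference, a minimal sketch of this kind of verification setup, with the md RAID0 used as an lvmcache cache for an existing brick LV; all device, VG, and LV names below are assumptions, not taken from the test environment:

# md RAID0 across the NVMe drives (device names assumed)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Add the md device to the brick VG and build a cache pool on it
pvcreate /dev/md0
vgextend gluster_vg /dev/md0
lvcreate -L 100G -n cachepool gluster_vg /dev/md0
lvcreate -L 1G -n cachepool_meta gluster_vg /dev/md0
lvconvert --type cache-pool --poolmetadata gluster_vg/cachepool_meta gluster_vg/cachepool

# Attach the cache pool to the slower brick LV
lvconvert --type cache --cachepool gluster_vg/cachepool gluster_vg/gluster_lv_engine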

