Description of problem:
-----------------------
During testing for RHHI4V-Ready, Marko hit a problem with NVMe drives.
Bricks created with md devices on top of NVMe drives hit the "bio too big"
problem.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI 2.0
kernel-3.10.0-863.el7

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a software RAID with 'md' over NVMe drives
2. Use this brick for the engine volume

Actual results:
---------------
Errors while creating bricks with an md layer on top of NVMe drives

Expected results:
-----------------
No errors

Additional info:
----------------
Thanks to Marko for the information

<snip>
[48684.118540] bio too big device dm-14 (504 > 256)
[48684.178821] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[48684.183512] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[48684.188373] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 51000 128
[49010.568086] bio too big device dm-14 (512 > 256)
[49010.568124] Buffer I/O error on dev dm-18, logical block 40959815, lost async page write
[49010.568168] Buffer I/O error on dev dm-18, logical block 40959816, lost async page write
[49010.568192] Buffer I/O error on dev dm-18, logical block 40959817, lost async page write
[49010.568217] Buffer I/O error on dev dm-18, logical block 40959818, lost async page write
[49010.568247] Buffer I/O error on dev dm-18, logical block 40959819, lost async page write
[49010.568270] Buffer I/O error on dev dm-18, logical block 40959820, lost async page write
[49010.568294] Buffer I/O error on dev dm-18, logical block 40959821, lost async page write
[49010.568317] Buffer I/O error on dev dm-18, logical block 40959822, lost async page write
[49010.568341] Buffer I/O error on dev dm-18, logical block 40959823, lost async page write
[49010.568365] Buffer I/O error on dev dm-18, logical block 40959824, lost async page write
[49388.815989] ovirtmgmt: port 2(vnet0) entered disabled state
[49388.818310] device vnet0 left promiscuous mode
[49388.818321] ovirtmgmt: port 2(vnet0) entered disabled state
[49510.213142] ovirtmgmt: port 2(vnet0) entered blocking state
[49510.213146] ovirtmgmt: port 2(vnet0) entered disabled state
[49510.213238] device vnet0 entered promiscuous mode
[49510.213813] ovirtmgmt: port 2(vnet0) entered blocking state
[49510.213817] ovirtmgmt: port 2(vnet0) entered forwarding state
[49697.210495] bio too big device dm-14 (504 > 256)
[49697.258032] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
[49697.263049] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
[49697.267329] md/raid0:md127: make_request bug: can't convert block across chunks or bigger than 512k 61240 128
</snip>
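For reference, a minimal sketch of the reproduction steps, assuming two NVMe
devices at the hypothetical paths /dev/nvme0n1 and /dev/nvme1n1; the md device
name, VG/LV names, and sizes are placeholders:

    # Assemble a RAID0 md device across the NVMe drives (names are hypothetical)
    mdadm --create /dev/md127 --level=0 --raid-devices=2 --chunk=512K \
          /dev/nvme0n1 /dev/nvme1n1

    # Build the brick's LVM stack on top of the md device
    pvcreate /dev/md127
    vgcreate gluster_vg /dev/md127
    lvcreate -L 100G -n engine_lv gluster_vg
    mkfs.xfs /dev/gluster_vg/engine_lv

Once I/O is driven through the dm devices stacked on md127, errors like those
in the snippet above appear in dmesg.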
(In reply to SATHEESARAN from comment #0)
> Description of problem:
> -----------------------
> During testing for RHHI4V-Ready, Marko hit a problem with NVMe drives.
> Bricks created with md devices on top of NVMe drives hit the "bio too big"
> problem.

If it is an option, you may work around this md/raid0 bug by deploying LVM
striped LV(s) across the NVMe backing devices instead.
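A minimal sketch of that workaround, assuming the same hypothetical NVMe
device paths and placeholder VG/LV names; adjust the size, stripe count, and
stripe size to your layout:

    # Use the NVMe drives directly as PVs, skipping the md layer entirely
    pvcreate /dev/nvme0n1 /dev/nvme1n1
    vgcreate gluster_vg /dev/nvme0n1 /dev/nvme1n1

    # Stripe the LV across both PVs, avoiding the md/raid0 make_request
    # path that hits the bug
    lvcreate --type striped --stripes 2 --stripesize 512K -L 500G \
             -n gluster_lv gluster_vg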
Moving to ON_QA as the dependent fix is available in RHEL 7.6
Tested with RHV 4.2.8 async.

1. Created an md RAID0
2. Used this software RAID volume for lvmcache

No problems observed; all looks good.
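For context, a sketch of the verified configuration, with hypothetical device,
VG, and LV names throughout; the md RAID0 serves as the fast cache layer in
front of an existing origin LV:

    # md RAID0 over the NVMe drives (device names are hypothetical)
    mdadm --create /dev/md127 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    pvcreate /dev/md127
    vgextend gluster_vg /dev/md127

    # Carve cache data and metadata LVs out of the md PV
    lvcreate -L 200G -n cachepool gluster_vg /dev/md127
    lvcreate -L 2G -n cachepool_meta gluster_vg /dev/md127

    # Bind them into a cache pool and attach it to the origin LV
    lvconvert --type cache-pool --poolmetadata gluster_vg/cachepool_meta gluster_vg/cachepool
    lvconvert --type cache --cachepool gluster_vg/cachepool gluster_vg/gluster_lv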