Bug 1557434
Summary: bio too big device md0 (1024 > 256)
Product: Red Hat Enterprise Linux 7
Reporter: John Pittman <jpittman>
Component: kernel
kernel sub component: RAID
Assignee: Ming Lei <minlei>
QA Contact: guazhang <guazhang>
Docs Contact:
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
CC: dblack, dhoward, djeffery, guazhang, hartsjc, jbrassow, jmoyer, jpittman, kevin, lvm-team, mkarg, msnitzer, mtowey, ncroxon, xni
Version: 7.6
Keywords: ZStream
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: kernel-3.10.0-863.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1568070 (view as bug list)
Environment:
Last Closed: 2018-10-30 08:49:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1568070, 1626477
Attachments:
Description
John Pittman
2018-03-16 15:04:08 UTC
(In reply to John Pittman from comment #0)
> Description of problem:
>
> During nvme hot swap events, customer is hitting 'bio too big' error.
>
> [Fri Mar 9 12:34:51 2018] bio too big device md0 (1024 > 256)
> [Fri Mar 9 12:34:51 2018] bio too big device md0 (824 > 256)
> [Fri Mar 9 12:34:51 2018] bio too big device md0 (1024 > 256)
> [Fri Mar 9 12:34:51 2018] bio too big device md0 (1024 > 256)
>
> vg03-fast--tmp 253 3 L--w 1 1 0 LVM-<omitted>
> vg03-mysql 253 2 L--w 1 1 0 LVM-<omitted>
>
> Personalities : [raid10]
> md0 : active raid10 nvme0n1[4] nvme2n1[2] nvme1n1[1] nvme3n1[3]
>       1562561536 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
>       bitmap: 0/12 pages [0KB], 65536KB chunk
>
> dm-2, max_sectors_kb: 128
> dm-3, max_sectors_kb: 128
> md0, max_sectors_kb: 128
> nvme0n1, max_sectors_kb: 128
> nvme1n1, max_sectors_kb: 512
> nvme2n1, max_sectors_kb: 512
> nvme3n1, max_sectors_kb: 512
>
> dm-2, max_hw_sectors_kb: 128
> dm-3, max_hw_sectors_kb: 128
> md0, max_hw_sectors_kb: 128
> nvme0n1, max_hw_sectors_kb: 128
> nvme1n1, max_hw_sectors_kb: 2147483647
> nvme2n1, max_hw_sectors_kb: 2147483647
> nvme3n1, max_hw_sectors_kb: 2147483647
>
> dm-2, max_segments: 33
> dm-3, max_segments: 33
> md0, max_segments: 33
> nvme0n1, max_segments: 33
> nvme1n1, max_segments: 65535
> nvme2n1, max_segments: 65535
> nvme3n1, max_segments: 65535
>
> 84:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd Device [144d:a822] (rev 01) (prog-if 02 [NVM Express])
> 85:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 171X [144d:a820] (rev 03) (prog-if 02 [NVM Express])
> 86:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 171X [144d:a820] (rev 03) (prog-if 02 [NVM Express])
> 87:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 171X [144d:a820] (rev 03) (prog-if 02 [NVM Express])
>
> Version-Release number of selected component (if applicable):
>
> 3.10.0-514.6.1.el7
>
> How reproducible:
>
> Hot swap nvme devices

Could you share how to hot swap nvme devices? I plan to try to reproduce this issue in my environment.

Thanks,

Created attachment 1410798 [details]
Queue values
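As a side note, the two numbers in the 'bio too big' message are 512-byte sector counts: the bio's size versus the queue's max_sectors limit, which is derived from max_sectors_kb. A quick sanity check of that arithmetic, using md0's 128 KB value from the report above:

```shell
# 'bio too big device md0 (1024 > 256)': the right-hand number is the queue
# limit in 512-byte sectors, i.e. max_sectors_kb * 1024 / 512.
max_sectors_kb=128     # md0's limit in the report above
limit=$(( max_sectors_kb * 1024 / 512 ))
echo "limit: ${limit} sectors"
```

So a 1024-sector (512 KB) bio submitted against md0's 256-sector limit produces exactly the message in the report.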
Ming, sorry for the delay. I was out yesterday. I can reproduce using a KVM virt with three scsi devices and one usb. They have different queue values so you just have to add the usb device last.
[root@localhost ~]# ls -lah /dev/disk/by-path
total 0
drwxr-xr-x. 2 root root 200 Mar 20 16:04 .
drwxr-xr-x. 6 root root 120 Mar 20 16:04 ..
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 pci-0000:00:01.1-ata-1.0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Mar 20 16:04 pci-0000:00:01.1-ata-1.0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Mar 20 16:04 pci-0000:00:01.1-ata-1.0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 pci-0000:00:01.1-ata-1.1 -> ../../sr0
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 pci-0000:00:06.7-usb-0:1:1.0-scsi-0:0:0:0 -> ../../sde
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 virtio-pci-0000:00:08.0-scsi-0:0:0:0 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 virtio-pci-0000:00:08.0-scsi-0:0:0:1 -> ../../sdd
lrwxrwxrwx. 1 root root 9 Mar 20 16:04 virtio-pci-0000:00:08.0-scsi-0:0:0:2 -> ../../sdc
[root@localhost ~]# mdadm --create /dev/md1 --metadata=1.2 --raid-devices=3 --level=raid1 /dev/sdb /dev/sdd /dev/sdc
[root@localhost ~]# vgcreate testvg /dev/md1
[root@localhost ~]# lvcreate -l 100%FREE -n testlv testvg
[root@localhost ~]# mkfs.ext4 /dev/testvg/testlv
[root@localhost ~]# mkdir /test
[root@localhost ~]# mount /dev/mapper/testvg-testlv /test
[root@localhost ~]# while true ; do dd if=/dev/zero of=/test/testfile bs=1M count=450 && sleep 3 ; done &
[root@localhost ~]# mdadm --grow /dev/md1 --level=1 --raid-devices=4 --add /dev/sde
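The reason the USB disk is added last is that block-layer limit stacking takes the minimum across all members, so md1's max_sectors_kb collapses to the USB disk's value the moment it joins the array. A sketch of that stacking rule: the 120 KB figure matches the 'bio too big device md1 (1008 > 240)' log seen later in this bug (240 sectors = 120 KB), while the 512 KB values for the SCSI members are assumed for illustration.

```shell
# Limit stacking sketch: the stacked device's max_sectors_kb is the minimum
# over its members.  512 for the three SCSI disks is assumed; 120 for the
# USB disk matches the 240-sector limit reported later in this bug.
printf '%s\n' 512 512 512 120 | sort -n | head -n1
```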
I tried at 7.3, the latest 7.4, and with the below patch (combination of both the ones you supplied), but the issue persisted:
[john@dhcp145-120 kernel]$ cat linux-kernel-test.patch
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -94,6 +94,11 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
return ret;
}
+static ssize_t queue_chunk_sectors_show(struct request_queue *q, char *page)
+{
+ return queue_var_show(q->limits.chunk_sectors, page);
+}
+
static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
{
int max_sectors_kb = queue_max_sectors(q) >> 1;
@@ -329,6 +334,11 @@ static struct queue_sysfs_entry queue_ra_entry = {
.store = queue_ra_store,
};
+static struct queue_sysfs_entry queue_chunk_sectors_entry = {
+ .attr = {.name = "chunk_sectors", .mode = S_IRUGO },
+ .show = queue_chunk_sectors_show,
+};
+
static struct queue_sysfs_entry queue_max_sectors_entry = {
.attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
.show = queue_max_sectors_show,
@@ -446,6 +456,7 @@ static struct attribute *default_attrs[] = {
&queue_requests_entry.attr,
&queue_ra_entry.attr,
&queue_max_hw_sectors_entry.attr,
+ &queue_chunk_sectors_entry.attr,
&queue_max_sectors_entry.attr,
&queue_max_segments_entry.attr,
&queue_max_integrity_segments_entry.attr,
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -813,7 +813,8 @@ int bio_add_page(struct bio *bio, struct page *page, unsigned int len,
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
unsigned int max_sectors;
- max_sectors = blk_max_size_offset(q, bio->bi_sector);
+ max_sectors = min(blk_queue_get_max_sectors(q, bio->bi_rw),
+ blk_max_size_offset(q, bio->bi_sector));
if ((max_sectors < (len >> 9)) && !bio->bi_size)
max_sectors = len >> 9;
While the loop was going and errors were occurring, I took the full queue values. They are attached.
With the same set of steps on 4.16.0-rc6+, the errors did not occur.

This issue reproduced in

(In reply to John Pittman from comment #5)
> I tried at 7.3, the latest 7.4, and with the below patch (combination of
> both the ones you supplied), but the issue persisted:

The above one is caused by the following:

1) md1's queue max_sectors_kb is 512 after md1 is created.
2) The LVM device sees the underlying md1's queue max_sectors_kb of 512, and respects that limit.
3) Inside md1's make_request function, an incoming bio is simply cloned into a new bio; bio_add_page() is not used, so the queue's max_sectors_kb limit is never considered.
4) md1's queue max_sectors_kb becomes 120 after the USB disk is added to md1, but LVM doesn't see the change at all.
5) So this warning is triggered.

There is no such issue on the upstream kernel, because commit 54efd50bfd873e2dbf7 ("block: make generic_make_request handle arbitrarily sized bios") deals with this issue well. And we can't backport that big change without breaking kABI.

CC MD & DM guys. One solution is to change MD's I/O path to consider this limit when bio_add_page() is bypassed, for example by using bio_clone_mddev(). Or any other ideas?

Thanks,

Created attachment 1411964 [details]
patch for fixing RAID1 only
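To observe the mismatch Ming describes on a live system, the stacked device's current limit can be compared with what the upper layer saw at table-load time. Below is a minimal, hypothetical helper (not part of any patch in this bug): it reads the standard `queue/max_sectors_kb` sysfs attribute, with the sysfs root parameterized so it can also be exercised against a fake tree; device names are examples from this reproducer.

```shell
# Read a block device's max_sectors_kb.  The second argument defaults to
# /sys, so the function can be pointed at a fake tree for testing.
queue_max_sectors_kb() {
    local dev="$1" root="${2:-/sys}"
    cat "${root}/block/${dev}/queue/max_sectors_kb"
}

# Example against the real sysfs (run before and after the mdadm --grow):
#   queue_max_sectors_kb md1
```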
Hi John Pittman,

Could you test the attached patch ('patch for fixing RAID1 only') to see if this issue is fixed?

BTW:

1) For now this patch dumps only one message of 'bio too big device md1 (256 > 240)', and the warning should be triggered on the MD device only. If you find this kind of log on other underlying disks, the patch is wrong; otherwise it should be fine if the function is correct.

2) Other MD levels may have a similar issue and need this kind of change too.

Xiao Ni and Nigel, could you take a look at this idea and see if it is correct and an acceptable solution?

Thanks,

(In reply to Ming Lei from comment #9)

Hi Ming,

For md the patch is OK. As you said, I'm not sure whether it affects other components. Maybe we need feedback from more people.

Thanks
Xiao

This would need to be tested by our QA team.

-Nigel

Thanks a lot Ming, results are below.

journalctl:
Mar 23 12:34:30 localhost.localdomain kernel: md: recovery of RAID array md1
Mar 23 12:34:34 localhost.localdomain kernel: bio too big device md1 (1008 > 240)
Mar 23 12:35:46 localhost.localdomain kernel: md: md1: recovery done.

dmesg:
[   90.098214] md/raid1:md1: active with 3 out of 3 mirrors
[   90.098274] md1: detected capacity change from 0 to 1072693248
[   90.098747] md: resync of RAID array md1
[  103.142336] md: md1: resync done.
[  137.986393] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  164.009037] md: recovery of RAID array md1
[  168.046693] bio too big device md1 (1008 > 240)
[  240.148897] md: md1: recovery done.

[root@localhost ~]# cat /proc/mounts | grep test
/dev/mapper/testvg-testlv /test ext4 rw,seclabel,relatime,data=ordered 0 0

If we take this patch, maybe we should add to the 'bio too big' message an indication that we are adjusting to it; otherwise the customer may think it's unrecoverable. Maybe something like: 'bio too big device md1 (1008 > 240)...adjusting sector count'.

Also, the customer is at raid10, so the issue is at least present there as well.

Ming, I did some further testing. On the 7.4 kernel, linear shows the issue as well. However, at 7.4 and 7.5 I could not test raid4, 5, and 6, because they all panic/hang at raid5_get_active_stripe. As Nigel mentioned, there will need to be further testing once the raid5_get_active_stripe issue is fixed. My 7.5 tests were on 3.10.0-861.el7.

Correction: I have not seen the issue at 7.4 for raid456.

(In reply to John Pittman from comment #12)

From the above dmesg log, it looks like there is no such issue any more. Could you let us know if the test result is expected? I mean, does raid1 function well from the user's point of view?

> If we take this patch, maybe we should add to the 'bio too big' message
> an indication that we are adjusting to it

That seems like a good idea, so that users won't report it as a real issue.

> Also, the customer is at raid10, so the issue is at least present there
> as well.

Yeah, I just want to make sure this approach works on raid1 first. If it does, we will investigate whether the same approach can be applied to the other RAID levels. So please focus on the raid1 test for now, since this patch only fixes raid1.

Thanks,

Thanks Ming. As far as I can tell, from a user standpoint it works well. It prints the message, the md recovery completes successfully, and I'm able to send I/O to the md device throughout the whole process.

Is there any indication as to whether this fix can make it into 7.5 or not?

(In reply to John Pittman from comment #17)
> Is there any indication as to whether this fix can make it into 7.5 or not?

It may be too late for 7.5, I guess.

As pointed out by Xiao Ni, there isn't such an issue on RAID0, and I have figured out how to do the similar thing on RAID10. I'm still not sure whether raid5 needs this kind of fix.

Xiao Ni & Nigel,

Could you take a look at raid5 and see if a similar fix is needed there?

And apart from raid0, raid1, raid10 and raid5, are there other RAID levels which need to be handled wrt. this issue?

Thanks

Setting needsinfo per Ming's last comment.

Yes, it is too late for RHEL 7.5.

-Nigel

Re-setting needsinfo for the below query from Ming.
>
> Xiao Ni & Nigel,
>
> Could you take a look at raid5 and see if a similar fix is needed for raid5?
>
> And except for raid0, raid1, raid10 and raid5, is there other RAIDs which
> need
> to be handled wrt. this issue?
>
> Thanks
Patch(es) committed on kernel repository and an interim kernel build is undergoing testing.

Patch(es) available on kernel-3.10.0-863.el7

Hello,
[root@storageqe ~]# ls -lah /dev/disk/by-path
total 0
drwxr-xr-x. 2 root root 300 Jun 7 14:28 .
drwxr-xr-x. 5 root root 100 Jun 7 14:24 ..
lrwxrwxrwx. 1 root root 9 Jun 7 14:24 pci-0000:00:01.1-ata-1.0 -> ../../sr0
lrwxrwxrwx. 1 root root 9 Jun 7 14:24 pci-0000:00:04.7-usb-0:1:1.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 9 Jun 7 14:24 pci-0000:00:06.0 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 pci-0000:00:06.0-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 pci-0000:00:06.0-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 pci-0000:00:06.0-part3 -> ../../vda3
lrwxrwxrwx. 1 root root 9 Jun 7 14:28 pci-0000:00:08.0-scsi-0:0:0:0 -> ../../sdd
lrwxrwxrwx. 1 root root 9 Jun 7 14:27 pci-0000:00:09.0-scsi-0:0:0:4 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Jun 7 14:28 pci-0000:00:09.0-scsi-0:0:0:5 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Jun 7 14:24 virtio-pci-0000:00:06.0 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 virtio-pci-0000:00:06.0-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 virtio-pci-0000:00:06.0-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jun 7 14:24 virtio-pci-0000:00:06.0-part3 -> ../../vda3
[root@storageqe ~]# mdadm --create /dev/md1 --metadata=1.2 --raid-devices=3 --level=linear /dev/sdb /dev/sdd /dev/sdc
mdadm: array /dev/md1 started.
[root@storageqe ~]#
[root@storageqe ~]# vgcreate testvg /dev/md1
Physical volume "/dev/md1" successfully created.
Volume group "testvg" successfully created
[root@storageqe ~]# lvcreate -l 100%FREE -n testlv testvg
Logical volume "testlv" created.
[root@storageqe ~]# mkfs.ext4 /dev/testvg/testlv
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
983040 inodes, 3928064 blocks
196403 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
120 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@storageqe ~]# mkdir /test
[root@storageqe ~]# mount /dev/mapper/testvg-testlv /test
[root@storageqe ~]# while true ; do dd if=/dev/zero of=/test/testfile bs=1M && sleep 3 ;rm -rf /test/testfile done &
> ^C
[root@storageqe ~]# while true ; do dd if=/dev/zero of=/test/testfile bs=1M && sleep 3 ;rm -rf /test/testfile ; done &
[1] 1281
[root@storageqe ~]# mdadm --grow /dev/md1 --add /dev/sda
[root@storageqe ~]# cat /proc/mdstat
Personalities : [linear]
md1 : active linear sdd[3] sdb[2] sdc[1] sda[0]
20952064 blocks super 1.2 0k rounding
unused devices: <none>
[root@storageqe ~]#
The 'bio too big' error doesn't reproduce; moving to verified.
Are there plans to get this pushed to the upstream kernel?

(In reply to kevin lyda from comment #41)
> Are there plans to get this pushed to the upstream kernel?

It isn't needed in the upstream Linux kernel: there is no such issue upstream, because 54efd50bfd873e2dbf7 ("block: make generic_make_request handle arbitrarily sized bios") deals with it well.

The patch for this bug has broken dm-raid in zstream, see https://bugzilla.redhat.com/show_bug.cgi?id=1598587 . The following patch from https://bugzilla.redhat.com/show_bug.cgi?id=1581845 is needed to fix it:
http://post-office.corp.redhat.com/archives/rhkernel-list/2018-May/msg06317.html

You will want to pick this RHEL-only patch too for the zstream:

commit 35e034b1bd6883ffdc0020e0b160f6839155ab34
Author: Nigel Croxon <ncroxon>
Date: Mon Jul 2 20:06:24 2018 -0400

    [md] raid10 set default value for max_sectors RHEL7 ONLY Patch

    The patch f07ee2ca89eeec2447eb447bcc4d38fcc41cb77f ([md] avoid NULL
    dereference to queue pointer) forgets to set the default value to
    max_sectors. It adds a judgement if (mddev->queue). If mddev->queue is
    null, max_sectors doesn't have a default value. This patch sets a
    default value.

    Signed-off-by: Xiao Ni <xni>
    Signed-off-by: Nigel Croxon <ncroxon>
    Signed-off-by: Bruno E. O. Meneguele <bmeneg>

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b7818d8..2c48e25 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1424,6 +1424,8 @@ retry_write:
 	if (mddev->queue)
 		max_sectors = min((int)blk_queue_get_max_sectors(mddev->queue, bio->bi_rw),
 				  r10_bio->sectors);
+	else
+		max_sectors = r10_bio->sectors;

 	for (i = 0; i < conf->copies; i++) {
 		int d = r10_bio->devs[i].devnum;

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3083

*** Bug 1626479 has been marked as a duplicate of this bug. ***