Bug 1627563 - [RHEL 7.7] raid10 kernel NULL pointer dereference in md_do_sync during raid creation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 7.7
Assignee: Nigel Croxon
QA Contact: ChanghuiZhong
Docs Contact: Jaroslav Klech
URL:
Whiteboard:
Depends On:
Blocks: 1622032 1636482 1655046
 
Reported: 2018-09-10 21:34 UTC by Corey Marthaler
Modified: 2019-08-06 12:10 UTC
CC List: 17 users

Fixed In Version: kernel-3.10.0-970.el7
Doc Type: Bug Fix
Doc Text:
Branch prediction of ternary operators no longer causes a system panic

Previously, branch prediction of ternary operators caused the compiler to call the `blk_queue_nonrot()` function before checking the `mddev->queue` structure. As a consequence, the system panicked. With this update, `mddev->queue` is checked first and `blk_queue_nonrot()` is called only afterwards, which prevents the bug from appearing. As a result, the system no longer panics in the described scenario.
Clone Of:
Clones: 1655046 (view as bug list)
Environment:
Last Closed: 2019-08-06 12:09:43 UTC
Target Upstream Version:


Attachments
[md] fix faster resync regression patch (1015 bytes, text/plain)
2018-09-11 00:35 UTC, Heinz Mauelshagen
Details


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:2029 None None None 2019-08-06 12:10:24 UTC

Description Corey Marthaler 2018-09-10 21:34:58 UTC
Description of problem:

Latest kernel-3.10.0-947 

hayes-02: lvcreate  --type raid5 -R 512.00k -i 2 -n takeover -L 2.75G centipede2


[   20.800034] type=1305 audit(1536614381.692:4): audit_pid=1184 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s1
[   20.833784] BUG: unable to handle kernel NULL pointer dereference at 0000000000000440
[   20.842545] IP: [<ffffffff8178f9d1>] md_do_sync+0xc31/0x1190
[   20.848880] PGD 0
[   20.851132] Oops: 0000 [#1] SMP
[   20.854753] Modules linked in: sunrpc dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx sb_edae
[   20.934401] CPU: 5 PID: 1165 Comm: mdX_resync Not tainted 3.10.0-947.el7.x86_64 #1
[   20.942841] Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.7.1 01/29/2018
[   20.951193] task: ffff8eb3b7eb30c0 ti: ffff8eb3af394000 task.ti: ffff8eb3af394000
[   20.959543] RIP: 0010:[<ffffffff8178f9d1>]  [<ffffffff8178f9d1>] md_do_sync+0xc31/0x1190
[   20.968585] RSP: 0018:ffff8eb3af397cb0  EFLAGS: 00010246
[   20.974511] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000470500806c00
[   20.982473] RDX: 00000000000000c0 RSI: 00000000000000c0 RDI: ffffffff81f58660
[   20.990436] RBP: ffff8eb3af397e48 R08: ffffffff81f58670 R09: 0000000000000000
[   20.998399] R10: 0000000000000003 R11: 0000000000000000 R12: 0000000000000000
[   21.006363] R13: ffff8eb3b72e9058 R14: 000000000000be00 R15: 20c49ba5e353f7cf
[   21.014325] FS:  0000000000000000(0000) GS:ffff8ed3bde80000(0000) knlGS:0000000000000000
[   21.023355] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   21.029766] CR2: 0000000000000440 CR3: 000000210b810000 CR4: 00000000003607e0
[   21.037730] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   21.045694] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   21.053656] Call Trace:
[   21.056392]  [<ffffffff812e004c>] ? update_curr+0x14c/0x1e0
[   21.062611]  [<ffffffff8178b6ad>] md_thread+0x16d/0x1e0
[   21.068444]  [<ffffffff8178b540>] ? find_pers+0x80/0x80
[   21.074277]  [<ffffffff812c1c91>] kthread+0xd1/0xe0
[   21.079711]  [<ffffffff812c1bc0>] ? insert_kthread_work+0x40/0x40
[   21.086515]  [<ffffffff81973c37>] ret_from_fork_nospec_begin+0x21/0x21
[   21.093804]  [<ffffffff812c1bc0>] ? insert_kthread_work+0x40/0x40

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Heinz Mauelshagen 2018-09-10 22:57:21 UTC
Narrowing down bug:

- tested kernel-3.10.0-944.el7 -> fine
- tested kernel-3.10.0-945.el7 -> crash as of initial description

$ git  diff --stat kernel-3.10.0-944.el7 kernel-3.10.0-945.el7 drivers/md
 drivers/md/md.c     | 6 ++++--
 drivers/md/raid10.c | 5 ++++-
 2 files changed, 8 insertions(+), 3 deletions(-)

md/raid10 is not being used in the "lvcreate --type raid5 ..." test.

git blame -> commit 58cdca88daf2423248c8491f0a46d76dc3d9d0d7.

Building test kernel with it reverted...

Comment 8 Heinz Mauelshagen 2018-09-11 00:35:01 UTC
Created attachment 1482236 [details]
[md] fix faster resync regression patch

mddev->queue can not be accessed unconditionally underneath device-mapper

Comment 13 Nigel Croxon 2018-09-11 18:20:45 UTC
Nacking Heinz's patch.

I am resubmitting my patch, that broke LVM.

-Nigel

Comment 18 Jonathan Earl Brassow 2018-10-04 15:08:38 UTC
(In reply to Nigel Croxon from comment #17)
> git log --pretty=oneline kernel-3.10.0-951.el7..kernel-3.10.0-954.el7
> 
> ...

Next time, it would be sufficient to simply say that there have been no MD changes since the last kernel that was validated.  (It's unlikely most folks would know why you are posting a long list of patches.)

We still need to get to the bottom of comment 16.

Comment 19 guazhang@redhat.com 2018-10-08 04:50:00 UTC
Hello

[root@storageqe-09 ~]# uname -a
Linux storageqe-09.lab.bos.redhat.com 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[root@storageqe-09 ~]# vgcreate nvm /dev/sd[bcdef]
WARNING: dos signature detected on /dev/sdb at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdb.
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdc.
WARNING: dos signature detected on /dev/sdd at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdd.
WARNING: dos signature detected on /dev/sde at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sde.
WARNING: dos signature detected on /dev/sdf at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdf.
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
  Physical volume "/dev/sdf" successfully created.
  Volume group "nvm" successfully created
[root@storageqe-09 ~]# lvcreate -y --ty raid5 -R 512k -i 2 -L 2.57G -n r nvm  
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 2.57 GiB
  Logical volume "r" created.
[root@storageqe-09 ~]# 

test passed with latest kernel, so will close it.

Comment 21 Corey Marthaler 2018-10-09 15:02:21 UTC
Yep, hit this last night with the latest kernel 3.10.0-957.el7.x86_64. Please don't close this bug in the future outside the verification process.

================================================================================
Iteration 0.20 started at Mon Oct  8 18:10:28 CDT 2018
================================================================================
Scenario kill_primary_non_synced_raid10_3legs: Kill primary leg of NON synced 3 leg raid10 volume(s)
********* RAID hash info for this scenario *********
* names:              non_synced_primary_raid10_3legs_1
* sync:               0
* type:               raid10
* -m |-i value:       3
* leg devices:        /dev/sdh1 /dev/sdd1 /dev/sdc1 /dev/sdb1 /dev/sdk1 /dev/sdo1
* spanned legs:       0
* manual repair:      0
* no MDA devices:     
* failpv(s):          /dev/sdh1
* failnode(s):        hayes-03
* lvmetad:            1
* raid fault policy:  warn
******************************************************

Creating raids(s) on hayes-03...
hayes-03: lvcreate  --type raid10 -i 3 -n non_synced_primary_raid10_3legs_1 -L 10G black_bird /dev/sdh1:0-3600 /dev/sdd1:0-3600 /dev/sdc1:0-3600 /dev/sdb1:0-3600 /dev/sdk1:0-3600 /dev/sdo1:0-3600

Current mirror/raid device structure(s):
  LV                                           Attr       LSize   Cpy%Sync Devices
   non_synced_primary_raid10_3legs_1            rwi-a-r--- <10.01g 0.00     non_synced_primary_raid10_3legs_1_rimage_0(0),non_synced_primary_raid10_3legs_1_rimage_1(0),non_synced_primary_raid10_3legs_1_rimage_2(0),non_synced_primary_raid10_3legs_1_rimage_3(0),non_synced_primary_raid10_3legs_1_rimage_4(0),non_synced_primary_raid10_3legs_1_rimage_5(0)
   [non_synced_primary_raid10_3legs_1_rimage_0] Iwi-aor---  <3.34g          /dev/sdh1(1)
   [non_synced_primary_raid10_3legs_1_rimage_1] Iwi-aor---  <3.34g          /dev/sdd1(1)
   [non_synced_primary_raid10_3legs_1_rimage_2] Iwi-aor---  <3.34g          /dev/sdc1(1)
   [non_synced_primary_raid10_3legs_1_rimage_3] Iwi-aor---  <3.34g          /dev/sdb1(1)
   [non_synced_primary_raid10_3legs_1_rimage_4] Iwi-aor---  <3.34g          /dev/sdk1(1)
   [non_synced_primary_raid10_3legs_1_rimage_5] Iwi-aor---  <3.34g          /dev/sdo1(1)
   [non_synced_primary_raid10_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdh1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdd1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdb1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdk1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdo1(0)

Creating xfs on top of mirror(s) on hayes-03...
Mounting mirrored xfs filesystems on hayes-03...

PV=/dev/sdh1
        non_synced_primary_raid10_3legs_1_rimage_0: 2
        non_synced_primary_raid10_3legs_1_rmeta_0: 2

Writing verification files (checkit) to mirror(s) on...
        ---- hayes-03 ----

Verifying files (checkit) on mirror(s) on...
        ---- hayes-03 ----

Current sync percent just before failure
        ( 11.28% )

Disabling device sdh on hayes-03
rescan device...

Attempting I/O to cause mirror down conversion(s) on hayes-03
dd if=/dev/zero of=/mnt/non_synced_primary_raid10_3legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.0236945 s, 1.8 GB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  LV                                           Attr       LSize   Cpy%Sync Devices
   non_synced_primary_raid10_3legs_1            rwi-aor-p- <10.01g 77.66    non_synced_primary_raid10_3legs_1_rimage_0(0),non_synced_primary_raid10_3legs_1_rimage_1(0),non_synced_primary_raid10_3legs_1_rimage_2(0),non_synced_primary_raid10_3legs_1_rimage_3(0),non_synced_primary_raid10_3legs_1_rimage_4(0),non_synced_primary_raid10_3legs_1_rimage_5(0)
   [non_synced_primary_raid10_3legs_1_rimage_0] Iwi-aor-p-  <3.34g          [unknown](1)
   [non_synced_primary_raid10_3legs_1_rimage_1] Iwi-aor---  <3.34g          /dev/sdd1(1)
   [non_synced_primary_raid10_3legs_1_rimage_2] Iwi-aor---  <3.34g          /dev/sdc1(1)
   [non_synced_primary_raid10_3legs_1_rimage_3] Iwi-aor---  <3.34g          /dev/sdb1(1)
   [non_synced_primary_raid10_3legs_1_rimage_4] Iwi-aor---  <3.34g          /dev/sdk1(1)
   [non_synced_primary_raid10_3legs_1_rimage_5] Iwi-aor---  <3.34g          /dev/sdo1(1)
   [non_synced_primary_raid10_3legs_1_rmeta_0]  ewi-aor-p-   4.00m          [unknown](0)
   [non_synced_primary_raid10_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdd1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdb1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdk1(0)
   [non_synced_primary_raid10_3legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdo1(0)

Verifying FAILED device /dev/sdh1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sdd1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdc1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdk1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdo1 *IS* in the volume(s)

Didn't receive heartbeat from hayes-03 for 120 seconds



[23877.450919] BUG: unable to handle kernel NULL pointer dereference at 0000000000000440
[23877.459679] IP: [<ffffffff85d8f8c3>] md_do_sync+0xc43/0x11b0
[23877.465999] PGD 0
[23877.468247] Oops: 0000 [#1] SMP
[23877.471866] Modules linked in: dm_snapshot dm_bufio ext4 mbcache jbd2 kvdo(O) uds(O) raid1 raid10 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx mxm_wmi iTCO_wdt iTCO_vendor_support dcdbas sunrpc sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr ipmi_si ipmi_devintf sg ipmi_msghandler mei_me lpc_ich mei dm_multipath dm_mod wmi acpi_power_meter ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic sr_mod cdrom mgag200 i2c_algo_bit drm_kms_helper qla2xxx syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ahci nvme_fc nvme_fabrics libahci nvme_core crct10dif_pclmul crct10dif_common libata drm_panel_orientation_quirks tg3 crc32c_intel scsi_transport_fc megaraid_sas ptp scsi_tgt pps_core
[23877.557042] CPU: 19 PID: 65027 Comm: mdX_resync Kdump: loaded Tainted: G           O   ------------   3.10.0-957.el7.x86_64 #1
[23877.569756] Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.8.0 05/28/2018
[23877.579655] task: ffff8d00f2611040 ti: ffff8d00f3278000 task.ti: ffff8d00f3278000
[23877.588004] RIP: 0010:[<ffffffff85d8f8c3>]  [<ffffffff85d8f8c3>] md_do_sync+0xc43/0x11b0
[23877.597043] RSP: 0018:ffff8d00f327bcb0  EFLAGS: 00010246
[23877.602969] RAX: 0000000000000000 RBX: 0000000000000002 RCX: 000034e080807b00
[23877.610931] RDX: 00000000000000c0 RSI: 00000000000000c0 RDI: ffffffff86558660
[23877.618894] RBP: ffff8d00f327be48 R08: ffffffff86558670 R09: 0000000000000000
[23877.626855] R10: ffff8d00fb0af518 R11: 000000407fec9000 R12: 00000000010d7400
[23877.634816] R13: ffff8d00f8685058 R14: 0000000000000002 R15: 20c49ba5e353f7cf
[23877.642780] FS:  0000000000000000(0000) GS:ffff8d20fe040000(0000) knlGS:0000000000000000
[23877.651810] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[23877.658221] CR2: 0000000000000440 CR3: 0000002226210000 CR4: 00000000003607e0
[23877.666183] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[23877.674144] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[23877.682106] Call Trace:
[23877.684849]  [<ffffffff858c2d00>] ? wake_up_atomic_t+0x30/0x30
[23877.691358]  [<ffffffff85d8b58d>] md_thread+0x16d/0x1e0
[23877.697188]  [<ffffffff85d8b420>] ? find_pers+0x80/0x80
[23877.703017]  [<ffffffff858c1c31>] kthread+0xd1/0xe0
[23877.708459]  [<ffffffff858c1b60>] ? insert_kthread_work+0x40/0x40
[23877.715260]  [<ffffffff85f74c37>] ret_from_fork_nospec_begin+0x21/0x21
[23877.722543]  [<ffffffff858c1b60>] ? insert_kthread_work+0x40/0x40
[23877.729341] Code: 39 d0 77 0e 49 83 bd 50 03 00 00 00 0f 84 f1 00 00 00 31 f6 4c 89 ef e8 cc c5 ff ff 85 c0 0f 85 df 00 00 00 49 8b 85 50 03 00 00 <48> 8b 80 40 04 00 00 f6 c4 10 0f 85 c8 00 00 00 bf f4 01 00 00
[23877.750983] RIP  [<ffffffff85d8f8c3>] md_do_sync+0xc43/0x11b0
[23877.757405]  RSP <ffff8d00f327bcb0>
[23877.761294] CR2: 0000000000000440

Comment 22 Jonathan Earl Brassow 2018-10-09 20:29:36 UTC
There was a similar problem to this one introduced by the fix for bug 1622032.  That has since been resolved and can no longer be the source of this bug.  The likelihood of hitting this bug is reduced, but not eliminated, as can be seen in Corey's recent comments.

I don't feel that this is a blocker for RHEL7.6.  However, I do feel that it is necessary to have a release note (known issue) and fix it ASAP in a zstream.

I am setting the flags to make that happen.

Comment 60 Bruno Meneguele 2018-11-30 12:05:20 UTC
Patch(es) committed on kernel-3.10.0-970.el7

Comment 64 ChanghuiZhong 2019-01-04 02:31:00 UTC
Hi,all

Following the reproduction steps provided by @XiaoNi, I reproduced this problem on kernel-3.10.0-947.el7, and verified on kernel-3.10.0-970.el7 that the problem has been resolved: there is no "BUG: unable to handle kernel NULL pointer dereference at 0000000000000440" in the console.log.

http://lab-02.rhts.eng.bos.redhat.com/beaker/logs/recipes/6358+/6358057/console.log


##########fixed kernel test######################

[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# uname -a
Linux storageqe-09.lab.bos.redhat.com 3.10.0-970.el7.x86_64 #1 SMP Thu Nov 29 21:07:56 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# vgcreate black_bird /dev/sd[b-g]
  Volume group "black_bird" successfully created
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# lvcreate --type raid10 -i 3 -n non_synced_primary_raid10_3legs_1 -L 10G black_bird /dev/sdb:0-3600 /dev/sdc:0-3600 /dev/sdd:0-3600 /dev/sde:0-3600 /dev/sdf:0-3600 /dev/sdg:0-3600
  Using default stripesize 64.00 KiB.
  Rounding size 10.00 GiB (2560 extents) up to stripe boundary size <10.01 GiB(2562 extents).
  Logical volume "non_synced_primary_raid10_3legs_1" created.
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# lsblk
NAME                                                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                       8:0    0 931.5G  0 disk 
├─sda1                                                    8:1    0     1G  0 part /boot
└─sda2                                                    8:2    0 930.5G  0 part 
  ├─rhel_storageqe--09-root                             253:0    0    50G  0 lvm  /
  ├─rhel_storageqe--09-swap                             253:1    0   7.8G  0 lvm  [SWAP]
  └─rhel_storageqe--09-home                             253:9    0 872.7G  0 lvm  /home
sdb                                                       8:16   0 931.5G  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_0  253:2    0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_0 253:3    0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sdc                                                       8:32   0 931.5G  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_1  253:4    0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_1 253:5    0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sdd                                                       8:48   0 931.5G  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_2  253:6    0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_2 253:7    0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sde                                                       8:64   0 931.5G  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_3  253:8    0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_3 253:10   0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sdf                                                       8:80   0 931.5G  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_4  253:11   0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_4 253:12   0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sdg                                                       8:96   0   3.7T  0 disk 
├─black_bird-non_synced_primary_raid10_3legs_1_rmeta_5  253:13   0     4M  0 lvm  
│ └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
└─black_bird-non_synced_primary_raid10_3legs_1_rimage_5 253:14   0   3.3G  0 lvm  
  └─black_bird-non_synced_primary_raid10_3legs_1        253:15   0    10G  0 lvm  
sdh                                                       8:112  0   3.7T  0 disk 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# mkfs.xfs /dev/mapper/black_bird-non_synced_primary_raid10_3legs_1 -f
meta-data=/dev/mapper/black_bird-non_synced_primary_raid10_3legs_1 isize=512    agcount=16, agsize=163952 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2623232, imaxpct=25
         =                       sunit=16     swidth=96 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# mount /dev/mapper/black_bird-non_synced_primary_raid10_3legs_1 /mnt/test
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# cp mdadm-4.1-rc1_2.el8.x86_64.rpm /mnt/test
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# echo offline > /sys/block/sdg/device/state
[root@storageqe-09 ~]# dd if=/dev/zero of=/mnt/test/file1 bs=4M count=10
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.014623 s, 2.9 GB/s
[root@storageqe-09 ~]# echo running > /sys/block/sdg/device/state 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# vgs
  VG                #PV #LV #SN Attr   VSize    VFree 
  black_bird          6   1   0 wz--n-   <8.19t <8.17t
  rhel_storageqe-09   1   3   0 wz--n- <930.51g  4.00m
[root@storageqe-09 ~]# lvs
  LV                                VG                Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  non_synced_primary_raid10_3legs_1 black_bird        rwi-aor-r- <10.01g                                    100.00          
  home                              rhel_storageqe-09 -wi-ao---- 872.69g                                                    
  root                              rhel_storageqe-09 -wi-ao----  50.00g                                                    
  swap                              rhel_storageqe-09 -wi-ao----   7.81g                                                    
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# ls /mnt/test
test/     testarea/ tests/    
[root@storageqe-09 ~]# ls /mnt/test
file1  mdadm-4.1-rc1_2.el8.x86_64.rpm
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# umount /mnt/test
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# vgremove black_bird -f
  Logical volume "non_synced_primary_raid10_3legs_1" successfully removed
  Volume group "black_bird" successfully removed
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# 
[root@storageqe-09 ~]# vgs
  VG                #PV #LV #SN Attr   VSize    VFree
  rhel_storageqe-09   1   3   0 wz--n- <930.51g 4.00m
[root@storageqe-09 ~]# lvs
  LV   VG                Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel_storageqe-09 -wi-ao---- 872.69g                                                    
  root rhel_storageqe-09 -wi-ao----  50.00g                                                    
  swap rhel_storageqe-09 -wi-ao----   7.81g                                                    
[root@storageqe-09 ~]# 

###########################test pass########################

Comment 65 Ben Arblaster 2019-01-28 14:00:03 UTC
Any ETA on a zstream for 7.6?

Comment 71 errata-xmlrpc 2019-08-06 12:09:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2029

