Bug 1851451

Summary:          Bus error in lvextend for RAID5 converted from RAID1
Product:          Red Hat Enterprise Linux 8
Component:        lvm2
Sub component:    Mirroring and RAID
Version:          8.3
Hardware:         x86_64
OS:               Linux
Status:           CLOSED ERRATA
Severity:         high
Priority:         high
Target Milestone: rc
Target Release:   8.0
Reporter:         Corey Marthaler <cmarthal>
Assignee:         Heinz Mauelshagen <heinzm>
QA Contact:       cluster-qe <cluster-qe>
CC:               agk, bugzilla.redhat, cluster-qe, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, zkabelac
Flags:            pm-rhel: mirror+
Fixed In Version: lvm2-2.03.09-3.el8
Clone Of:         1784351
Bug Depends On:   1784351
Type:             Bug
Last Closed:      2020-11-04 02:00:38 UTC

Description Corey Marthaler 2020-06-26 14:44:28 UTC
+++ This bug was initially created as a clone of Bug #1784351 +++

How reproducible:

Using a QEMU VM booted from the Gentoo minimal install ISO.
The VM has 3 drives of 10 GB each.


Steps to Reproduce:
(Lines beginning with # are commands executed as root; a condensed reproducer sketch follows at the end of this description.)

# /etc/init.d/lvm start
 * Starting lvmetad ...
 
# rc-status
Dynamic Runlevel: needed/wanted
 lvmetad                                                          [  started  ]
 lvm                                                              [  started  ]

# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME   SIZE TYPE FSTYPE   MOUNTPOINT
loop0  313M loop squashfs /mnt/livecd
sda     10G disk          
sdb     10G disk          
sdc     10G disk          
sr0    347M rom  iso9660  /mnt/cdrom

# pvcreate /dev/sda /dev/sdb /dev/sdc
  Physical volume "/dev/sda" successfully created.
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.

# vgcreate test /dev/sda /dev/sdb /dev/sdc
  Volume group "test" successfully created

# pvs
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sda   test lvm2 a--  <10,00g <10,00g
  /dev/sdb   test lvm2 a--  <10,00g <10,00g
  /dev/sdc   test lvm2 a--  <10,00g <10,00g

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree  
  test   3   0   0 wz--n- <29,99g <29,99g

# lvcreate -L8G -n root test
  Logical volume "root" created.

# lvconvert --type raid5 test/root
  Using default stripesize 64,00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid1.
  Repeat this command to convert to raid5 after an interim conversion has finished.
Are you sure you want to convert linear LV test/root to raid1 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout    
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid1
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear    
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear    
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear    
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear    

# lvconvert --type raid5 test/root
  Using default stripesize 64,00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout             
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid5,raid5_ls
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear             
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear             
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear             
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear             

# lvextend --version
  LVM version:     2.02.184(2) (2019-03-22)
  Library version: 1.02.156 (2019-03-22)
  Driver version:  4.39.0
  Configuration:   ./configure --prefix=/usr --build=x86_64-pc-linux-gnu 
  --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info 
  --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib 
  --disable-dependency-tracking --docdir=/usr/share/doc/lvm2-2.02.184-r5 
  --htmldir=/usr/share/doc/lvm2-2.02.184-r5/html --enable-dmfilemapd 
  --enable-dmeventd --enable-cmdlib --enable-applib --enable-fsadm 
  --enable-lvmetad --enable-lvmpolld --with-mirrors=internal 
  --with-snapshots=internal --with-thin=internal --with-cache=internal 
  --with-thin-check=/sbin/thin_check --with-cache-check=/sbin/cache_check 
  --with-thin-dump=/sbin/thin_dump --with-cache-dump=/sbin/cache_dump 
  --with-thin-repair=/sbin/thin_repair --with-cache-repair=/sbin/cache_repair 
  --with-thin-restore=/sbin/thin_restore 
  --with-cache-restore=/sbin/cache_restore --with-clvmd=none 
  --with-cluster=none --enable-readline --disable-selinux --enable-pkgconfig 
  --with-confdir=/etc --exec-prefix= --sbindir=/sbin --with-staticdir=/sbin 
  --libdir=/lib64 --with-usrlibdir=/usr/lib64 --with-default-dm-run-dir=/run 
  --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm 
  --with-default-pid-dir=/run --enable-udev_rules --enable-udev_sync 
  --with-udevdir=/lib/udev/rules.d --disable-lvmlockd-sanlock 
  --disable-udev-systemd-background-jobs --disable-notify-dbus 
  --with-systemdsystemunitdir=/lib/systemd/system CLDFLAGS=-Wl,-O1 
  -Wl,--as-needed

# lvextend -L+1G test/root
Bus error

# dmesg | tail
[ 6582.162403] md/raid1:mdX: active with 1 out of 2 mirrors
[ 6582.229526] mdX: bitmap file is out of date, doing full recovery
[ 6582.231064] md: recovery of RAID array mdX
[ 6623.103487] md: mdX: recovery done.
[ 6648.042134] md/raid:mdX: device dm-2 operational as raid disk 0
[ 6648.042136] md/raid:mdX: device dm-4 operational as raid disk 1
[ 6648.042439] md/raid:mdX: raid level 5 active with 2 out of 2 devices, algorithm 2
[ 6648.064165] device-mapper: raid: raid456 discard support disabled due to discard_zeroes_data uncertainty.
[ 6648.064167] device-mapper: raid: Set dm-raid.devices_handle_discard_safely=Y to override.
[ 6712.237731] traps: lvextend[13881] trap stack segment ip:556f10ddc428 sp:7ffc4d232010 error:0 in lvm[556f10d44000+106000]


Reverting back to RAID1 solves the problem.

# lvconvert --type raid1 test/root
Are you sure you want to convert raid5 LV test/root to raid1 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout    
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid1
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear    
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear    
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear    
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear    

# lvextend -L+1G test/root
  Extending 2 mirror images.
  Size of logical volume test/root changed from 8.00 GiB (2048 extents) to 9.00 GiB (2304 extents).
  Logical volume test/root successfully resized.

# lvconvert --type raid5 test/root
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.

Additional info:
A similar problem occurs with Fedora 31 on a physical machine, with LVM 2.03.06(2) (2019-10-23);
there the message is "Segmentation fault".
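
Condensed reproducer sketch of the steps above (assumptions: /dev/sda, /dev/sdb and
/dev/sdc are three spare 10 GB disks whose contents may be destroyed, and lvconvert -y
is used to skip the confirmation prompts shown in the transcript):

# pvcreate /dev/sda /dev/sdb /dev/sdc
# vgcreate test /dev/sda /dev/sdb /dev/sdc
# lvcreate -L8G -n root test
# lvconvert -y --type raid5 test/root    (interim conversion: linear -> raid1)
# lvs -a -o name,copy_percent test       (wait for Cpy%Sync to reach 100.00)
# lvconvert -y --type raid5 test/root    (raid1 -> 2-legged raid5)
# lvextend -L+1G test/root               (bus error / segfault on affected builds)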

--- Additional comment from Frédéric KIEBER on 2019-12-17 13:33:12 UTC ---

Adding stripes also works around the problem, but it doubles the LV size :(

# lvconvert --stripes 2 test/root

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  test   3   1   0 wz--n- <29.99g 5.96g

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize  Cpy%Sync Devices                                            Layout
  root            16,00g 100,00   root_rimage_0(0),root_rimage_1(0),root_rimage_2(0) raid,raid5,raid5_ls
  [root_rimage_0]  8,00g          /dev/sda(2049)                                     linear
  [root_rimage_0]  8,00g          /dev/sda(0)                                        linear
  [root_rimage_1]  8,00g          /dev/sdb(2049)                                     linear
  [root_rimage_1]  8,00g          /dev/sdb(1)                                        linear
  [root_rimage_2]  8,00g          /dev/sdc(2049)                                     linear
  [root_rimage_2]  8,00g          /dev/sdc(1)                                        linear
  [root_rmeta_0]   4,00m          /dev/sda(2048)                                     linear
  [root_rmeta_1]   4,00m          /dev/sdb(0)                                        linear
  [root_rmeta_2]   4,00m          /dev/sdc(0)                                        linear

# lvextend -L+1G test/root
  Using stripesize of last segment 64.00 KiB
  Size of logical volume test/root changed from 16.00 GiB (4096 extents) to 17.00 GiB (4352 extents).
  Logical volume test/root successfully resized.

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  test   3   1   0 wz--n- <29.99g <4.48g

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize  Cpy%Sync Devices                                            Layout
  root            17,00g 100,00   root_rimage_0(0),root_rimage_1(0),root_rimage_2(0) raid,raid5,raid5_ls
  [root_rimage_0]  8,50g          /dev/sda(2049)                                     linear
  [root_rimage_0]  8,50g          /dev/sda(0)                                        linear
  [root_rimage_0]  8,50g          /dev/sda(2050)                                     linear
  [root_rimage_1]  8,50g          /dev/sdb(2049)                                     linear
  [root_rimage_1]  8,50g          /dev/sdb(1)                                        linear
  [root_rimage_1]  8,50g          /dev/sdb(2050)                                     linear
  [root_rimage_2]  8,50g          /dev/sdc(2049)                                     linear
  [root_rimage_2]  8,50g          /dev/sdc(1)                                        linear
  [root_rimage_2]  8,50g          /dev/sdc(2050)                                     linear
  [root_rmeta_0]   4,00m          /dev/sda(2048)                                     linear
  [root_rmeta_1]   4,00m          /dev/sdb(0)                                        linear
  [root_rmeta_2]   4,00m          /dev/sdc(0)                                        linear

--- Additional comment from Heinz Mauelshagen on 2020-06-24 12:06:20 UTC ---

Fixed by rejecting size change requests on 2-legged raid5* and raid4:

master commit id: 2cf0f90780bed64cb4062eb6dfa714ed03eecfb7 and 04bba5ea421b02275197bfb16b4d1bbf8879b240
stable 2.02 commit id: d17780c6b85a0f136e0ed395d5722d82bd8c7464 and e7e2288ff4ac34d825dd13dd45b0418723a7da84
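
For reference, the fix commits can be inspected in an upstream lvm2 checkout, e.g.
(the clone URL below is only one possible location of the upstream tree; adjust to
whichever lvm2 checkout you have):

# git clone https://sourceware.org/git/lvm2.git && cd lvm2
# git show --stat 2cf0f90780bed64cb4062eb6dfa714ed03eecfb7 04bba5ea421b02275197bfb16b4d1bbf8879b240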

Comment 1 Corey Marthaler 2020-06-26 14:49:28 UTC
Cloning to track this in rhel8.3. 

[root@host-087 ~]# lvconvert --type raid5 test/root
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.

[root@host-087 ~]# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize   Cpy%Sync Devices                           Layout             
  root              8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid5,raid5_ls
  [root_rimage_0]   8.00g          /dev/sda(0)                       linear             
  [root_rimage_1]   8.00g          /dev/sdb(1)                       linear             
  [root_rmeta_0]    4.00m          /dev/sda(2048)                    linear             
  [root_rmeta_1]    4.00m          /dev/sdb(0)                       linear             

[root@host-087 ~]# lvextend -L+1G test/root
Segmentation fault (core dumped)

Jun 26 09:27:49 host-087 systemd[1]: Started Process Core Dump (PID 180473/UID 0).
Jun 26 09:27:50 host-087 systemd-coredump[180474]: Process 180472 (lvextend) of user 0 dumped core.

  Stack trace of thread 180472:
  #0  0x0000563f675fddd8 lv_add_segment (lvm)
  #1  0x0000563f676061b7 _lv_extend_layered_lv (lvm)
  #2  0x0000563f67606983 lv_extend (lvm)
  #3  0x0000563f67607427 _lvresize_volume (lvm)
  #4  0x0000563f67607c92 lv_resize (lvm)
  #5  0x0000563f67587441 _lvresize_single (lvm)
  #6  0x0000563f675a1bea process_each_vg (lvm)
  #7  0x0000563f67587815 lvresize (lvm)
  #8  0x0000563f67584d5d lvm_run_command (lvm)
  #9  0x0000563f67586083 lvm2_main (lvm)
  #10 0x00007febc32aa7b3 __libc_start_main (libc.so.6)
  #11 0x0000563f675613ae _start (lvm)


4.18.0-219.el8.x86_64

kernel-4.18.0-219.el8    BUILT: Tue Jun 23 15:32:02 CDT 2020
lvm2-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
lvm2-libs-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
lvm2-lockd-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-libs-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-event-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-event-libs-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020

Comment 5 Corey Marthaler 2020-07-01 18:15:33 UTC
Fix verified in the latest rpms.

kernel-4.18.0-211.el8    BUILT: Thu Jun  4 03:33:39 CDT 2020
lvm2-2.03.09-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
lvm2-libs-2.03.09-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
lvm2-dbusd-2.03.09-3.el8    BUILT: Mon Jun 29 13:53:38 CDT 2020
lvm2-lockd-2.03.09-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
boom-boot-1.2-1.el8    BUILT: Sun Jun  7 07:20:03 CDT 2020
device-mapper-1.02.171-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
device-mapper-libs-1.02.171-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
device-mapper-event-1.02.171-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020
device-mapper-event-libs-1.02.171-3.el8    BUILT: Mon Jun 29 13:50:23 CDT 2020


[root@hayes-02 ~]# lvcreate -L8G -n root test
  Logical volume "root" created.

[root@hayes-02 ~]# lvconvert --type raid5 test/root
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid1.
  Repeat this command to convert to raid5 after an interim conversion has finished.
Are you sure you want to convert linear LV test/root to raid1 type? [y/n]: y
  Logical volume test/root successfully converted.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                          
  root            test rwi-a-r--- 8.00g                                    20.65            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] test iwi-aor--- 8.00g                                                     /dev/sdb1(0)                     
  [root_rimage_1] test Iwi-aor--- 8.00g                                                     /dev/sdc1(1)                     
  [root_rmeta_0]  test ewi-aor--- 4.00m                                                     /dev/sdb1(2048)                  
  [root_rmeta_1]  test ewi-aor--- 4.00m                                                     /dev/sdc1(0)                     

[root@hayes-02 ~]# lvconvert --type raid5 test/root
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.

[root@hayes-02 ~]# lvextend -L+1G test/root
  Cannot resize raid5 LV test/root. Convert to more stripes first.
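
With the fix in place, the supported way to grow such an LV is to add stripes first and
then extend, as in the workaround from the original report (sketch; the extra stripe
needs a free PV and, in the 2-legged case, doubles the LV size):

# lvconvert --stripes 2 test/root
# lvextend -L+1G test/root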

Comment 8 errata-xmlrpc 2020-11-04 02:00:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546