Bug 1784351 - Bus error in lvextend for RAID5 converted from RAID1
Summary: Bus error in lvextend for RAID5 converted from RAID1
Keywords:
Status: MODIFIED
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: 2.02.184
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks: 1851451
 
Reported: 2019-12-17 10:12 UTC by Frédéric KIEBER
Modified: 2023-08-10 15:40 UTC
CC List: 7 users

Fixed In Version: lvm2-2.03.10
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1851451 (view as bug list)
Environment:
Last Closed:
Embargoed:
pm-rhel: lvm-technical-solution?
pm-rhel: lvm-test-coverage?



Description Frédéric KIEBER 2019-12-17 10:12:17 UTC
How reproducible:

Using a QEMU VM with the Gentoo minimal ISO.
The VM has 3 drives of 10 GB each.
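A minimal sketch of how such a VM can be set up (the disk image file names and ISO path are illustrative, not from the original report):

# qemu-img create -f qcow2 disk1.qcow2 10G
# qemu-img create -f qcow2 disk2.qcow2 10G
# qemu-img create -f qcow2 disk3.qcow2 10G
# qemu-system-x86_64 -m 2G -boot d -cdrom install-amd64-minimal.iso \
    -drive file=disk1.qcow2,format=qcow2 \
    -drive file=disk2.qcow2,format=qcow2 \
    -drive file=disk3.qcow2,format=qcow2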


Steps to Reproduce:
(Lines beginning with # are commands executed as root)

# /etc/init.d/lvm start
 * Starting lvmetad ...
 
# rc-status
Dynamic Runlevel: needed/wanted
 lvmetad                                                          [  started  ]
 lvm                                                              [  started  ]

# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME   SIZE TYPE FSTYPE   MOUNTPOINT
loop0  313M loop squashfs /mnt/livecd
sda     10G disk          
sdb     10G disk          
sdc     10G disk          
sr0    347M rom  iso9660  /mnt/cdrom

# pvcreate /dev/sda /dev/sdb /dev/sdc
  Physical volume "/dev/sda" successfully created.
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.

# vgcreate test /dev/sda /dev/sdb /dev/sdc
  Volume group "test" successfully created

# pvs
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sda   test lvm2 a--  <10,00g <10,00g
  /dev/sdb   test lvm2 a--  <10,00g <10,00g
  /dev/sdc   test lvm2 a--  <10,00g <10,00g

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree  
  test   3   0   0 wz--n- <29,99g <29,99g

# lvcreate -L8G -n root test
  Logical volume "root" created.

# lvconvert --type raid5 test/root
  Using default stripesize 64,00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid1.
  Repeat this command to convert to raid5 after an interim conversion has finished.
Are you sure you want to convert linear LV test/root to raid1 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout    
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid1
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear    
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear    
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear    
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear    

# lvconvert --type raid5 test/root
  Using default stripesize 64,00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout             
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid5,raid5_ls
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear             
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear             
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear             
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear             
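At this point the LV reports raid5_ls but still has only two images, i.e. one data leg plus one parity leg per stripe. The kernel-side dm-raid target can be inspected to confirm this (a sketch; test-root is the dm device name LVM derives from the vg-lv pair):

# dmsetup table test-root
# dmsetup status test-root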

# lvextend --version
  LVM version:     2.02.184(2) (2019-03-22)
  Library version: 1.02.156 (2019-03-22)
  Driver version:  4.39.0
  Configuration:   ./configure --prefix=/usr --build=x86_64-pc-linux-gnu 
  --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info 
  --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib 
  --disable-dependency-tracking --docdir=/usr/share/doc/lvm2-2.02.184-r5 
  --htmldir=/usr/share/doc/lvm2-2.02.184-r5/html --enable-dmfilemapd 
  --enable-dmeventd --enable-cmdlib --enable-applib --enable-fsadm 
  --enable-lvmetad --enable-lvmpolld --with-mirrors=internal 
  --with-snapshots=internal --with-thin=internal --with-cache=internal 
  --with-thin-check=/sbin/thin_check --with-cache-check=/sbin/cache_check 
  --with-thin-dump=/sbin/thin_dump --with-cache-dump=/sbin/cache_dump 
  --with-thin-repair=/sbin/thin_repair --with-cache-repair=/sbin/cache_repair 
  --with-thin-restore=/sbin/thin_restore 
  --with-cache-restore=/sbin/cache_restore --with-clvmd=none 
  --with-cluster=none --enable-readline --disable-selinux --enable-pkgconfig 
  --with-confdir=/etc --exec-prefix= --sbindir=/sbin --with-staticdir=/sbin 
  --libdir=/lib64 --with-usrlibdir=/usr/lib64 --with-default-dm-run-dir=/run 
  --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm 
  --with-default-pid-dir=/run --enable-udev_rules --enable-udev_sync 
  --with-udevdir=/lib/udev/rules.d --disable-lvmlockd-sanlock 
  --disable-udev-systemd-background-jobs --disable-notify-dbus 
  --with-systemdsystemunitdir=/lib/systemd/system CLDFLAGS=-Wl,-O1 
  -Wl,--as-needed

# lvextend -L+1G test/root
Bus error

# dmesg | tail
[ 6582.162403] md/raid1:mdX: active with 1 out of 2 mirrors
[ 6582.229526] mdX: bitmap file is out of date, doing full recovery
[ 6582.231064] md: recovery of RAID array mdX
[ 6623.103487] md: mdX: recovery done.
[ 6648.042134] md/raid:mdX: device dm-2 operational as raid disk 0
[ 6648.042136] md/raid:mdX: device dm-4 operational as raid disk 1
[ 6648.042439] md/raid:mdX: raid level 5 active with 2 out of 2 devices, algorithm 2
[ 6648.064165] device-mapper: raid: raid456 discard support disabled due to discard_zeroes_data uncertainty.
[ 6648.064167] device-mapper: raid: Set dm-raid.devices_handle_discard_safely=Y to override.
[ 6712.237731] traps: lvextend[13881] trap stack segment ip:556f10ddc428 sp:7ffc4d232010 error:0 in lvm[556f10d44000+106000]
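The trap line shows the fault is inside the lvm binary itself (lvextend is a symlink to lvm, installed in /sbin per the configure options above). A backtrace can be captured from a core dump like this (a sketch; the core file name depends on kernel.core_pattern, and gdb must be available on the live system):

# ulimit -c unlimited
# lvextend -L+1G test/root
# gdb /sbin/lvm core
(gdb) bt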


Reverting back to RAID1 solves the problem.

# lvconvert --type raid1 test/root
Are you sure you want to convert raid5 LV test/root to raid1 type? [y/n]: y
  Logical volume test/root successfully converted.

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize Cpy%Sync Devices                           Layout    
  root            8.00g 100.00   root_rimage_0(0),root_rimage_1(0) raid,raid1
  [root_rimage_0] 8.00g          /dev/sda(0)                       linear    
  [root_rimage_1] 8.00g          /dev/sdb(1)                       linear    
  [root_rmeta_0]  4.00m          /dev/sda(2048)                    linear    
  [root_rmeta_1]  4.00m          /dev/sdb(0)                       linear    

# lvextend -L+1G test/root
  Extending 2 mirror images.
  Size of logical volume test/root changed from 8.00 GiB (2048 extents) to 9.00 GiB (2304 extents).
  Logical volume test/root successfully resized.

# lvconvert --type raid5 test/root
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV test/root when converting from raid1 to raid5.
Are you sure you want to convert raid1 LV test/root to raid5 type? [y/n]: y
  Logical volume test/root successfully converted.
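Condensed, the workaround is (the -y flag, assumed here, answers the conversion prompts so the sequence can run non-interactively):

# lvconvert -y --type raid1 test/root
# lvextend -L+1G test/root
# lvconvert -y --type raid5 test/root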

Additional info:
A similar problem occurs with Fedora 31 on a physical computer, with LVM 2.03.06(2) (2019-10-23).
There the message is "Segmentation fault".

Comment 1 Frédéric KIEBER 2019-12-17 13:33:12 UTC
Adding stripes also solves the problem, but it doubles the size (the doubling is worked out below) :(

# lvconvert --stripes 2 test/root

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  test   3   1   0 wz--n- <29.99g 5.96g

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize  Cpy%Sync Devices                                            Layout
  root            16,00g 100,00   root_rimage_0(0),root_rimage_1(0),root_rimage_2(0) raid,raid5,raid5_ls
  [root_rimage_0]  8,00g          /dev/sda(2049)                                     linear
  [root_rimage_0]  8,00g          /dev/sda(0)                                        linear
  [root_rimage_1]  8,00g          /dev/sdb(2049)                                     linear
  [root_rimage_1]  8,00g          /dev/sdb(1)                                        linear
  [root_rimage_2]  8,00g          /dev/sdc(2049)                                     linear
  [root_rimage_2]  8,00g          /dev/sdc(1)                                        linear
  [root_rmeta_0]   4,00m          /dev/sda(2048)                                     linear
  [root_rmeta_1]   4,00m          /dev/sdb(0)                                        linear
  [root_rmeta_2]   4,00m          /dev/sdc(0)                                        linear
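The doubling follows from raid5 geometry, where usable size = (number of images - 1) × image size:

  2 images: (2 - 1) × 8G =  8G   (the raid5 converted from raid1)
  3 images: (3 - 1) × 8G = 16G   (after --stripes 2)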

# lvextend -L+1G test/root
  Using stripesize of last segment 64.00 KiB
  Size of logical volume test/root changed from 16.00 GiB (4096 extents) to 17.00 GiB (4352 extents).
  Logical volume test/root successfully resized.

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  test   3   1   0 wz--n- <29.99g <4.48g

# lvs -a -o name,lv_size,copy_percent,devices,lv_layout
  LV              LSize  Cpy%Sync Devices                                            Layout
  root            17,00g 100,00   root_rimage_0(0),root_rimage_1(0),root_rimage_2(0) raid,raid5,raid5_ls
  [root_rimage_0]  8,50g          /dev/sda(2049)                                     linear
  [root_rimage_0]  8,50g          /dev/sda(0)                                        linear
  [root_rimage_0]  8,50g          /dev/sda(2050)                                     linear
  [root_rimage_1]  8,50g          /dev/sdb(2049)                                     linear
  [root_rimage_1]  8,50g          /dev/sdb(1)                                        linear
  [root_rimage_1]  8,50g          /dev/sdb(2050)                                     linear
  [root_rimage_2]  8,50g          /dev/sdc(2049)                                     linear
  [root_rimage_2]  8,50g          /dev/sdc(1)                                        linear
  [root_rimage_2]  8,50g          /dev/sdc(2050)                                     linear
  [root_rmeta_0]   4,00m          /dev/sda(2048)                                     linear
  [root_rmeta_1]   4,00m          /dev/sdb(0)                                        linear
  [root_rmeta_2]   4,00m          /dev/sdc(0)                                        linear

Comment 2 Heinz Mauelshagen 2020-06-24 12:06:20 UTC
Fixed by rejecting size change requests on 2-legged raid5* and raid4:

master commit ids: 2cf0f90780bed64cb4062eb6dfa714ed03eecfb7 and 04bba5ea421b02275197bfb16b4d1bbf8879b240
stable 2.02 commit ids: d17780c6b85a0f136e0ed395d5722d82bd8c7464 and e7e2288ff4ac34d825dd13dd45b0418723a7da84
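To check whether a given lvm2 tree already contains the fix (a sketch; assumes a git checkout of the lvm2 repository, and use the stable 2.02 ids on a stable-2.02 checkout):

# git merge-base --is-ancestor 2cf0f90780bed64cb4062eb6dfa714ed03eecfb7 HEAD && echo "fix present"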

Comment 3 Marian Csontos 2020-12-16 15:12:35 UTC
Not yet fixed in a released 2.02.* version, though. Keeping this in MODIFIED until it is.

