Bug 2181573 - lvcreate happily creates RAID5 volumes with reduced redundancy
Summary: lvcreate happily creates RAID5 volumes with reduced redundancy
Keywords:
Status: NEW
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: LVM Team
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-24 15:27 UTC by Marius Vollmer
Modified: 2023-08-10 15:41 UTC
CC List: 7 users

Fixed In Version:
Doc Type: ---
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
pm-rhel: lvm-technical-solution?
pm-rhel: lvm-test-coverage?



Description Marius Vollmer 2023-03-24 15:27:50 UTC
Description of problem:

A logical volume of type RAID5 created with lvcreate might have less redundancy than expected because one physical volume is unnecessarily used for two stripes.

Version-Release number of selected component (if applicable):
lvm2-2.03.11-9.fc37.x86_64

How reproducible:
Always

Steps to Reproduce:
- Prepare four block devices that can be yanked. I have used "targetcli".
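  For reference, one way to set this up with targetcli (a rough sketch; the backing-file names and the 1G size are placeholders, and <wwn> stands for the auto-generated target WWN):

# targetcli /backstores/fileio create name=disk0 file_or_dev=/var/tmp/disk0.img size=1G
  (likewise for disk1, disk2 and disk3)
# targetcli /loopback create
# targetcli /loopback/naa.<wwn>/luns create /backstores/fileio/disk0
  (likewise for disk1, disk2 and disk3)

  The LUNs should then show up as regular SCSI disks (/dev/sdb .. /dev/sde here) that can be removed via /sys/block/<dev>/device/delete.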

- Create a volume group out of all four.
# vgcreate vgroup0 /dev/sdb /dev/sdc /dev/sdd /dev/sde

- Reduce the available space on the first and last PV a bit.
# lvcreate -n lvol0 vgroup0 -L100M /dev/sdb
# lvcreate -n lvol1 vgroup0 -L100M /dev/sde
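
  The resulting uneven free space can be checked with, for example:

# pvs -o pv_name,pv_size,pv_free /dev/sdb /dev/sdc /dev/sdd /dev/sde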

- Create a logical volume of maximum size and type RAID5 on all four PVs.
# lvcreate -n lvol2 vgroup0 -l100%PVS --type raid5

Actual results:

The lvol2 volume is allocated in such a way that /dev/sdc is used for two stripes, "0" and "1". (The same is true for /dev/sdd, with stripes "1" and "2".)

# lvs -a -o name,devices
  LV               Devices                                              
  lvol0            /dev/sdb(0)                                          
  lvol1            /dev/sde(0)                                          
  lvol2            lvol2_rimage_0(0),lvol2_rimage_1(0),lvol2_rimage_2(0)
  [lvol2_rimage_0] /dev/sdc(1)                                          
  [lvol2_rimage_0] /dev/sde(25)                                         
  [lvol2_rimage_1] /dev/sdd(1)                                          
  [lvol2_rimage_1] /dev/sdc(99)                                         
  [lvol2_rimage_2] /dev/sdb(26)                                         
  [lvol2_rimage_2] /dev/sdd(99)                                         
  [lvol2_rmeta_0]  /dev/sdc(0)                                          
  [lvol2_rmeta_1]  /dev/sdd(0)                                          
  [lvol2_rmeta_2]  /dev/sdb(25)                                         

Yanking /dev/sdd will kill both stripes "1" and "2" and make the volume unrecoverable.

# echo 1 > /sys/block/sdd/device/delete
# lvconvert --repair vgroup0/lvol2
  WARNING: Couldn't find device with uuid qsBLYv-eEwq-ZxGJ-bTU1-XmU5-W6Tx-LrS7XD.
  WARNING: VG vgroup0 is missing PV qsBLYv-eEwq-ZxGJ-bTU1-XmU5-W6Tx-LrS7XD (last written to /dev/sdd).
  WARNING: Couldn't find device with uuid qsBLYv-eEwq-ZxGJ-bTU1-XmU5-W6Tx-LrS7XD.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Unable to replace more than 1 PVs from (raid5) vgroup0/lvol2.
  Failed to replace faulty devices in vgroup0/lvol2.
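
For completeness, the degraded state can also be seen in the reporting fields, e.g. with something like:

# lvs -o lv_name,lv_attr,lv_health_status vgroup0/lvol2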

Expected results:

A RAID5 volume should be able to survive the loss of one PV, and lvconvert --repair should be able to bring it back to full redundancy.

lvcreate and lvextend should not allocate space from a given PV to more than one stripe.

Comment 1 Marius Vollmer 2023-03-24 15:32:06 UTC
Bug 2055394 seems related, but appears to have stalled.

