Bug 2055394
| Summary: | Creating an LVM RAID1 volume using different-sized PVs messes up the mirror split | | |
|---|---|---|---|
| Product: | [Community] LVM and device-mapper | Reporter: | Henriël <henriel> |
| Component: | lvm2 | Assignee: | LVM Team <lvm-team> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | ASSIGNED --- | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac |
| Version: | unspecified | Keywords: | Reopened |
| Target Milestone: | --- | Flags: | pm-rhel: lvm-technical-solution? pm-rhel: lvm-test-coverage? |
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-02-17 13:48:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Henriël 2022-02-16 20:11:22 UTC
The output of pvs --units k:

```
  PV         VG            Fmt  Attr PSize          PFree
  /dev/sdb   vg-datastore  lvm2 a--  488382464.00k  0k
  /dev/sdc   vg-datastore  lvm2 a--  244195328.00k  0k
  /dev/sdd   vg-datastore  lvm2 a--  244195328.00k  12288.00k
  /dev/sde   vg-datastore  lvm2 a--  976760832.00k  0k
```

FWIW: as you got a new VG, rmeta devices will be allocated on extent 0 on distinct PVs.

This is an allocation bug fixed in the newer lvm version, as the test below confirms. Please update your lvm version and retry. In case the allocation error remains, please reopen and add a comment.

```
# lvm version
  LVM version:     2.03.11(2) (2021-01-08)
  Library version: 1.02.175 (2021-01-08)
  Driver version:  4.45.0
  Configuration:   ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm --with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm --with-usrlibdir=/usr/lib64 --enable-fsadm --enable-write_install --with-user= --with-group= --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --enable-pkgconfig --enable-cmdlib --enable-dmeventd --enable-blkid_wiping --disable-readline --enable-editline --with-cluster=internal --enable-cmirrord --with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync --with-thin=internal --with-cache=internal --enable-lvmpolld --enable-lvmlockd-dlm --enable-lvmlockd-dlmcontrol --enable-lvmlockd-sanlock --enable-dbus-service --enable-notify-dbus --enable-dmfilemapd --with-writecache=internal --with-vdo=internal --with-vdo-format=/usr/bin/vdoformat --with-integrity=internal --disable-silent-rules

# pvcreate /dev/sd[a-d]
  Physical volume "/dev/sda" successfully created.
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.

# vgcreate t /dev/sd[a-d]
  Volume group "t" successfully created

# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  t    4   0   0 wz--n- <1.98t <1.98t

# lvcreate -ndatastore -m1 --nosync -l100%FREE t
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Logical volume "datastore" created.

# pvs
  PV         VG Fmt  Attr PSize     PFree
  /dev/sda   t  lvm2 a--  <500.00g  0
  /dev/sdb   t  lvm2 a--  <250.00g  0
  /dev/sdc   t  lvm2 a--  <250.00g  0
  /dev/sdd   t  lvm2 a--  <1024.00g <24.01g

# lvs -ao+devices
  LV                   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  datastore            t  Rwi-a-r--- 999.98g                                     100.00          datastore_rimage_0(0),datastore_rimage_1(0)
  [datastore_rimage_0] t  iwi-aor--- 999.98g                                                     /dev/sdd(1)   <-- raid1 leg 1
  [datastore_rimage_1] t  iwi-aor--- 999.98g                                                     /dev/sda(1)   <-+
  [datastore_rimage_1] t  iwi-aor--- 999.98g                                                     /dev/sdb(0)   <--- raid1 leg 2
  [datastore_rimage_1] t  iwi-aor--- 999.98g                                                     /dev/sdc(0)   <-+
  [datastore_rmeta_0]  t  ewi-aor--- 4.00m                                                       /dev/sdd(0)
  [datastore_rmeta_1]  t  ewi-aor--- 4.00m                                                       /dev/sda(0)
```

It's actually reproducible on upstream lvm2 (as of today) with the sizes from comment 1. I shuffled the sizes around a bit and was able to find a different tuple causing bogus allocation :-(
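For anyone who wants to probe the allocation without dedicating four disks, a minimal sketch using sparse files and loop devices might look like the following. The sizes are scaled-down stand-ins for the 500G/250G/250G/1024G PVs from the test above, and the file paths and VG name `t` are only illustrative; the allocator may behave differently at this smaller scale, so the bogus split is not guaranteed to reproduce.

```sh
#!/bin/bash
# Scaled-down reproduction attempt on loop devices (run as root).
# The sizes below stand in for the 500G/250G/250G/1024G PVs from the test
# above; treat this as a probe, not a guaranteed reproducer.
set -e

imgs=()
devs=()
for size in 500 250 250 1024; do
    img=$(mktemp /tmp/pv-XXXXXX)
    truncate -s "${size}M" "$img"            # sparse backing file
    imgs+=("$img")
    devs+=("$(losetup -f --show "$img")")    # attach it and remember the loop device
done

pvcreate "${devs[@]}"
vgcreate t "${devs[@]}"

# Same creation command as in the report: unsynchronised raid1 over all free space.
lvcreate -ndatastore -m1 --nosync -l100%FREE t

# List which PVs each rimage/rmeta sub-LV landed on, so the two legs can be
# compared for overlapping or lopsided PV usage.
lvs -a -o lv_name,devices,seg_size t

# Tear everything down again.
vgremove -f t
for dev in "${devs[@]}"; do losetup -d "$dev"; done
rm -f "${imgs[@]}"
```

The final `lvs` call is the quickest way to see how the two legs were split: each `rimage`/`rmeta` line names the PV(s) backing it, so an allocation like the one reported above (one leg consuming PVs that the other leg needed) shows up directly in the `Devices` column.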