Bug 1643651
Summary: lvm can overwrite extents beyond metadata area

Product: Red Hat Enterprise Linux 7
Reporter: David Teigland <teigland>
Component: lvm2
Assignee: David Teigland <teigland>
lvm2 sub component: Command-line tools
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Docs Contact: Marek Suchánek <msuchane>
Severity: urgent
Priority: urgent
CC: agk, bruce.howells, bugzilla, carl, cmarthal, ddumas, dominik.mierzejewski, ebenahar, fbrychta, fgarciad, gveitmic, heinzm, jbrassow, jdeenada, jpittman, jpriddy, lkuprova, loberman, mcsontos, mkalinin, msnitzer, pasik, pdwyer, prajnoha, prockai, rhandlin, salmy, sreber, teigland, therman, thornber, xzhou, zkabelac
Version: 7.6
Keywords: ZStream
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: lvm2-2.02.184-1.el7
Doc Type: Bug Fix
Doc Text:
.LVM no longer causes data corruption in the first 128kB of allocatable space of a physical volume
Previously, a bug in the I/O layer of LVM might have caused data corruption in rare cases. The bug could manifest only when the following conditions were true at the same time:
* A physical volume (PV) was created with a non-default alignment. The default is 1MB.
* An LVM command was modifying metadata at the tail end of the metadata region of the PV.
* A user or a file system was modifying the same bytes (racing).
No cases of the data corruption have been reported.
With this update, the problem has been fixed, and LVM can no longer cause data corruption under these conditions.
Story Points: ---
Clones: 1644199, 1644206 (view as bug list)
Last Closed: 2019-08-06 13:10:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 1644199, 1644206, 1644684
Attachments:
Description
David Teigland
2018-10-26 21:35:15 UTC
Created attachment 1497911 [details]
tested patch
This is a hacky patch that I have tested, and it fixes the reproducible corruption I was seeing in a test.
This patch needs a bit of cleanup; I'm attaching it as is because it's the one I've tested.
Created attachment 1497912 [details]
cleanup
This is some incremental cleanup to the fix that should be tested.
This is the patch I'm testing further, which includes fixed cleanup:
https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/dev-dct-last-byte-1
(For some reason buildbot won't build this, not sure why.)

The reproducible LV corruption I was seeing came from the following:

1. Set up and start sanlock and lvmlockd.
2. vgcreate --shared --metadatasize 1m foo /dev/sdg
   (Note that this creates an internal "lvmlock" LV that sanlock uses to store leases.)
3. lvcreate 500 inactive LVs in foo.
4. vgremove foo.

During the vgremove step, sanlock notices that its updates to the internal "lvmlock" LV are periodically lost. This is because when vgremove writes metadata at the end of the metadata area, it also clobbers PEs that were allocated to the lvmlock LV. (sanlock reads/writes blocks to the lvmlock LV and notices if data changes out from under it.)

It should be straightforward to reproduce this same issue without lvmlockd and sanlock. Create an ordinary VG, create an initial small LV (one that uses the first PEs in the VG), and start a script or program that reads/writes data to that LV and verifies that what it wrote comes back again. Then create 500 other LVs in the VG, and remove those 500 LVs. This causes the VG metadata to grow large and wrap around the end of the metadata area. When lvm writes to the end of the metadata area, it will clobber data that the test program wrote, and the test program should eventually notice that its last write is missing.

vg_validate() was designed as an independent in-tree check to guarantee that only valid metadata can ever hit disk. You might consider adding a similar hook, called and implemented independently of the faulty new code here, to try to make this class of bug impossible.
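The clobbering described above can be pictured with a toy model. This is an illustration built from the behavior discussed in this report (writes to the tail of the metadata area spill into the first PE, and PEs starting at a multiple of 128KB are immune), not lvm source code; the function name and the assumption that I/O happens in 128KiB-aligned blocks are for illustration only.

```python
# Toy model of the corruption mechanism (illustrative, NOT lvm internals).
# Assumption for this sketch: metadata I/O happens in 128KiB-aligned blocks,
# so writing the last bytes of the metadata area rewrites the whole block
# they fall in, which overlaps the first PE when pe_start is unaligned.

IO_BLOCK = 128 * 1024  # 128KiB, per the discussion in this report


def clobbered_range(pe_start: int):
    """Return the (start, end) byte range of the first PE that a metadata
    write at the tail of the metadata area could touch, or None when
    pe_start is 128KiB-aligned and therefore immune."""
    if pe_start % IO_BLOCK == 0:
        return None  # metadata I/O never crosses into the PE
    block_start = (pe_start // IO_BLOCK) * IO_BLOCK  # align down to 128KiB
    return (pe_start, block_start + IO_BLOCK)


# Default layout: pe_start = 1MiB, a multiple of 128KiB -> immune.
print(clobbered_range(1024 * 1024))  # None
# Non-default alignment (like the 576k pe_start in the QA reproducer below):
# bytes from 576KiB up to the 640KiB block boundary are at risk.
print(clobbered_range(576 * 1024))   # (589824, 655360)
```

This matches the observations later in this report: the default 1MB pe_start is immune, while a VG created with default_data_alignment=0 and a 576k pe_start hits the bug.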
Possible ideas for mitigation (untested):

- Revert to a release before the bad commit.
- Don't change any VG metadata (if lvm.conf locking_type is 1, change it to 4 to enforce this). This is not appropriate if you have anything that makes metadata changes automatically (e.g. dmeventd for LV extension).
- Identify which LVs contain the PEs that might be affected, and deactivate those LVs before changing VG metadata.
- Use pvmove to move the LEs at risk elsewhere, then lvcreate to allocate "do not use" LVs on those PEs, taking them out of use.

I expect this is the fix we will use (waiting for buildbot to finish testing it):
https://sourceware.org/git/?p=lvm2.git;a=commit;h=dabbed4f6af9d37d56671dd68a048b2462cd1da2

Pushed out final fixes.

stable branch for 7.6:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=ab27d5dc2a5c3bf23ab8fed438f1542015dc723d

master branch:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=aecf542126640faa17c240afbb1ea61f11355c39

"LVM might cause data corruption in the first 128kB of the disk" is not technically correct. It is the first 128kB of the first lvm physical extent of each PV in the VG. lvm can potentially use these physical extents anywhere in any LV.

LVM might cause data corruption in the first 128kB of the disk

A bug in the I/O layer of LVM causes LVM to read and write back the first 128kB of data that immediately follows the LVM metadata on the disk. If another program or the file system is modifying these blocks when you use an LVM command, changes might be lost. As a consequence, this might lead to data corruption in rare cases.

To work around this problem, do not use LVM commands that change volume group (VG) metadata, such as "lvcreate" or "lvextend", while logical volumes (LVs) in the VG are in use.

1) s/128k of the disk/128k of allocatable space of a PV/

2) s/sometimes//

3) s/follow/follows/

4) Should there be any word on when this issue will be resolved? Users will want to know if this is something that will be fixed soon.
5) s/"do not use LVM commands"/"avoid using LVM commands"/ With the likelihood so low of problems, users do have some discretion of performing these commands; but they should try to avoid it. Thanks a lot, David, Jon, and Marek! (In reply to Jonathan Earl Brassow from comment #17) > LVM might cause data corruption in the first 128kB of the disk > > A bug in the I/O layer of LVM causes LVM to read and write back the first > 128kB of data that immediately follows the LVM metadata on the disk. If > another program or the file system is modifying these blocks when you use an > LVM command, changes might be lost. As a consequence, this might lead to > data corruption in rare cases. > > To work around this problem, do not use LVM commands that change volume > group (VG) metadata, such as "lvcreate" or "lvextend", while logical volumes > (LVs) in the VG are in use. > > 1) s/128k of the disk/128k of allocatable space of a PV/ > > 2) s/sometimes// > > 3) s/follow/follows/ (In reply to Jonathan Earl Brassow from comment #18) > 5) s/"do not use LVM commands"/"avoid using LVM commands"/ > > With the likelihood so low of problems, users do have some discretion of > performing these commands; but they should try to avoid it. All above fixed. > > 4) Should there be any word on when this issue will be resolved? Users will > want to know if this is something that will be fixed soon. We cannot make any promises regarding future fixes in docs, sorry. I am now going to use this version and republish the Release Notes with it. If you have any further suggestions for improvements, we will be republishing the book quite frequently in the upcoming days (there is a number of last-minute release notes, as always). Thanks again! By default in RHEL7, lvm seems to consistently create VGs with the first physical extent (PE) at offset 1MB. When a PE begins at a multiple of 128KB (like 1MB) it is immune to this bug. 
To check the PE offset:

  $ vgs -o name,pe_start --units k vg
    VG  1st PE
    vg  1024.00k

The config setting "default_data_alignment" is 1 (MB) by default, and this leads to the common pe_start value of 1MB. If this setting is turned off (0), then pe_start values may not be aligned to 128KB and are potentially affected by this bug.

We have had the fix since yesterday. Marian, is there anything else you need before creating upstream releases and a 7.6 build?

Hi David, I'm from RHV storage QE. We would like to test RHV sanity with the fix. Can you please provide us with the lvm build that contains the fix? Thanks

Elad, can you use lvm2-2.02.180-10.el7_6.2?
https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=791193

Email about this bug to the upstream lvm mailing list:
https://www.redhat.com/archives/linux-lvm/2018-October/msg00054.html

There's nothing relevant to say about how it's fixed.

I think the doc text may be stating the problem a little too severely:

1. This problem could only arise if an LVM PV was created with a non-default alignment (the default is 1MB). This is uncommon.
2. This problem could only arise while an LVM command is running and modifying metadata. This is usually infrequent.
3. The problem could only arise when the LVM command is modifying metadata at the tail end of the metadata region. This is usually infrequent.
4. The problem could only arise if the previous three items are all true at the same moment that a user or fs happens to be modifying the same bytes (racing).
5. Data loss could occur if the race in 4 went one way.

That is probably too much detail for the doc text, so perhaps saying that it's "rare" is sufficient.

Thanks for the additional details. I've rewritten the release note to be more specific and not to give the impression that there were reported cases of the data corruption.

Let's remove the "found through code inspection" phrase, because it's not entirely accurate.
I discovered this through an unusual test that I was running that revealed this issue.

(In reply to David Teigland from comment #42)
> Let's remove the "found through code inspection" phrase, because it's not
> entirely accurate. I discovered this through an unusual test that I was
> running that revealed this issue.

Removed.

Marking verified with the latest rpms.

3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7                          BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7                     BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7                  BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7                    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7                 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7            BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7           BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7      BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019
sanlock-3.7.3-1.el7                          BUILT: Tue May 21 10:44:00 CDT 2019
sanlock-lib-3.7.3-1.el7                      BUILT: Tue May 21 10:44:00 CDT 2019

[root@hayes-02 ~]# vgcreate --config devices/default_data_alignment=0 --metadatasize 520k gg /dev/sdg1
  Physical volume "/dev/sdg1" successfully created.
  Volume group "gg" successfully created

[root@hayes-02 ~]# vgs -o+pe_start gg
  VG #PV #LV #SN Attr   VSize    VFree    1st PE
  gg  1   0   0 wz--n- <931.25g <931.25g 576.00k

[root@hayes-02 ~]# lvcreate -l1 -n test gg
  Logical volume "test" created.
[root@hayes-02 ~]# sanlock daemon -w 0
[root@hayes-02 ~]# sanlock client init -s LS:0:/dev/gg/test:0 -o 1
init
init done 0
[root@hayes-02 ~]# sanlock client add_lockspace -s LS:1:/dev/gg/test:0 -o 1
add_lockspace_timeout 1
add_lockspace_timeout done 0

[root@hayes-02 ~]# tail -f /var/log/sanlock.log
2019-07-02 16:07:41 81985 [9327]: sanlock daemon started 3.7.3 host 7856c016-0652-4b98-9b9b-b472c5895c90.hayes-02.l
2019-07-02 16:07:55 81999 [9330]: s1 lockspace LS:1:/dev/gg/test:0
2019-07-02 16:07:58 82002 [9327]: s1 host 1 1 81999 7856c016-0652-4b98-9b9b-b472c5895c90.hayes-02.l

[root@hayes-02 ~]# for i in `seq 1 1000`; do lvcreate -an -l1 -n lv$i gg; done
[...]
  VG gg metadata on /dev/sdg1 (292777 bytes) too large for circular buffer (585216 bytes with 292471 used)
  Failed to write VG gg.
  VG gg metadata on /dev/sdg1 (292777 bytes) too large for circular buffer (585216 bytes with 292471 used)
  Failed to write VG gg.
  VG gg metadata on /dev/sdg1 (292778 bytes) too large for circular buffer (585216 bytes with 292471 used)
  Failed to write VG gg.

[root@hayes-02 ~]# lvremove -f gg
[...]
  Logical volume "lv948" successfully removed
  Logical volume "lv949" successfully removed
  Logical volume "lv950" successfully removed
  Logical volume "lv951" successfully removed
  Logical volume "lv952" successfully removed
  Logical volume "lv953" successfully removed

## No reported corruption
2019-07-02 16:12:19 82263 [9337]: s1 delta_renew long write time 1 sec
2019-07-02 16:19:08 82672 [9337]: s1 delta_renew long write time 1 sec

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2253
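To make the read-modify-write race from the release note concrete, here is a toy simulation. It is not lvm code; the buffer, offsets, and actors are made up. It only illustrates the sequence the doc text describes: lvm reads the 128kB following its metadata, a filesystem modifies those bytes in the meantime, and lvm's write-back silently restores the stale copy.

```python
# Toy simulation of the lost-write race (illustrative, NOT lvm internals).

disk = bytearray(128 * 1024)  # stands in for the 128KiB after the metadata area

# 1. lvm's I/O layer reads the whole region while updating metadata.
lvm_buffer = bytes(disk)

# 2. Meanwhile, a filesystem writes new data into the same region.
disk[4096:4100] = b"NEW!"

# 3. lvm writes its (now stale) buffer back, clobbering the fs update.
disk[:] = lvm_buffer

# The filesystem's write has been silently lost.
assert disk[4096:4100] != b"NEW!"
```

Whether the race is lost depends on ordering: if the filesystem write lands after step 3 instead of between steps 1 and 3, nothing is corrupted, which is why the problem only appeared rarely.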