| Field | Value |
|---|---|
| Summary | Volumes for thin provisioning pools are not monitored on activation |
| Product | Red Hat Enterprise Linux 6 |
| Component | lvm2 |
| Version | 6.3 |
| Status | CLOSED ERRATA |
| Severity | unspecified |
| Priority | unspecified |
| Reporter | Zdenek Kabelac <zkabelac> |
| Assignee | Zdenek Kabelac <zkabelac> |
| QA Contact | Cluster QE <mspqa-list> |
| CC | agk, cmarthal, dwysocha, heinzm, jbrassow, mbroz, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac |
| Target Milestone | rc |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | lvm2-2.02.95-3.el6 |
| Doc Type | Bug Fix |
| Doc Text | No documentation needed. |
| Clones | 870248 (view as bug list) |
| Bug Blocks | 870248 |
| Last Closed | 2012-06-20 15:02:58 UTC |
**Description** (Zdenek Kabelac, 2012-03-21 09:17:50 UTC)
Patches to address this problem have been committed upstream:

https://www.redhat.com/archives/lvm-devel/2012-March/msg00140.html
https://www.redhat.com/archives/lvm-devel/2012-March/msg00151.html

Some updates may still be needed, as the behavior requires more clarification in several corner cases. However, the default behavior that respects the lvm.conf settings should now work properly.

---

I can confirm that with the proper configuration :) in lvm.conf, the resize of thin_pool works as expected:

```
Apr 6 04:06:23 node02 lvm[2068]: Thin vg-thin_pool-tpool is now 100% full.
Apr 6 04:06:23 node02 kernel: device-mapper: thin: 253:4: no free space available.
Apr 6 04:06:23 node02 lvm[2068]: Extending logical volume thin_pool to 600.00 MiB
Apr 6 04:06:23 node02 lvm[2068]: Monitoring thin vg-thin_pool-tpool.
Apr 6 04:06:23 node02 lvm[2068]: Logical volume thin_pool successfully resized
Apr 6 04:06:23 node02 lvm[2068]: Thin vg-thin_pool-tpool is now 86% full.
Apr 6 04:06:25 node02 lvm[2068]: No longer monitoring thin vg-thin_pool-tpool.
Apr 6 04:06:33 node02 lvm[2068]: Thin vg-thin_pool-tpool is now 91% full.
Apr 6 04:06:33 node02 lvm[2068]: Extending logical volume thin_pool to 720.00 MiB
Apr 6 04:06:33 node02 lvm[2068]: Monitoring thin vg-thin_pool-tpool.
Apr 6 04:06:33 node02 lvm[2068]: Logical volume thin_pool successfully resized
Apr 6 04:06:35 node02 lvm[2068]: No longer monitoring thin vg-thin_pool-tpool.
```
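The autoextend behavior shown in the log above is driven by the dmeventd thin plugin together with the lvm.conf activation settings. A minimal configuration sketch follows; the exact values here are assumptions inferred from the thread (a later comment mentions tests with thresholds of 50% and 90% and a 20% growth step), not the values used on this machine:

```
# /etc/lvm/lvm.conf (fragment) - illustrative values only
activation {
    # extend the thin pool once its data usage crosses this percentage
    thin_pool_autoextend_threshold = 90

    # grow the pool by this percentage of its current size on each extension
    thin_pool_autoextend_percent = 20
}
```

With these settings, dmeventd monitors the pool and calls back into lvm2 to extend it, which matches the "Extending logical volume thin_pool" lines in the log.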
However, after this I did a reboot of the machine and tried removing the VG. This is just after boot; no other commands were executed:

```
(04:09:53) [root@node02:~]$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n-  9.51g     0
  vg         2   2   0 wz--n- 19.05g 18.35g
(04:10:00) [root@node02:~]$ lvs
  LV        VG       Attr     LSize   Pool      Origin Data%  Move Log Copy%  Convert
  lv_root   VolGroup -wi-ao--   7.54g
  lv_swap   VolGroup -wi-ao--   1.97g
  lvol1     vg       Vwi---tz   1.00g thin_pool
  thin_pool vg       twi---tz 720.00m
(04:10:04) [root@node02:~]$ vgremove -ff vg
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of thin pool vg/thin_pool failed (status:2). Manual repair required
  (thin_dump --repair /dev/mapper/vg-thin_pool_tmeta)!
  Failed to update thin pool thin_pool.
(04:10:13) [root@node02:~]$
```

Additional info: during the reboot I received these messages as well (the console output below is garbled as captured, apparently because several error messages were interleaved on the console):

```
Stopping monitoring for VG VolGroup:
/dev/mapper/vg-thin_pool: read failed after 0 of 4 Input/output error
/dev/mapper/vg-thin_pool: read failed 279808: Input/output error
/dev/mapper/vg-thin_pool: 4096 at 0: Input/output error
/dev/mapper/vg-thin_pool: r 4096 at 4096: Input/output error
Huge memory allocation (size 67108864) rejected - metadata corruption? Cioctl argument.
Failed to get state of mapped device
Huge memory allocation (size 67108864) rejected - metadata corruption? dn't create ioctl argument.
Huge memory allocation (size 6710ata corruption? Couldn't create ioctl argument.
Huge memory allocation (size 67108864) rejected - metadata corruption? ate ioctl argument.
Huge memory allocation (size 67108864) rejectption? Couldn't create ioctl argument.
llocation (size 67108864) rejected - metadata corruption? octl argument.
Huge memory allocation (size 67108864) rejected - metadata Couldn't create ioctl argument.
ation (size 67108864) rejected - metadata corruption? Couldn't ct.
Huge memory allocation (size 67108864) rejected - metadata corr Couldn't create ioctl argument.
Huge memory allocatiojected - metadata corruption?
Couldn't create ioctl argumen Failed to get driver version
[  OK  ]
```

---

(In reply to comment #9)

> [quoted autoextend log and post-reboot transcript trimmed; see above]
>
> (04:10:04) [root@node02:~]$ vgremove -ff vg
> /usr/sbin/thin_check: execvp failed: No such file or directory
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Check of thin pool vg/thin_pool failed (status:2). Manual repair required
> (thin_dump --repair /dev/mapper/vg-thin_pool_tmeta)!
> Failed to update thin pool thin_pool.
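For reference, the manual repair path suggested by the error message would look roughly as follows. This is a sketch only: the device name is taken from the transcript above, it assumes the pool's metadata LV is accessible under that mapper name, and the exact tool options may differ across device-mapper-persistent-data versions.

```
# Check the pool metadata (read-only, reports errors if any):
thin_check /dev/mapper/vg-thin_pool_tmeta

# Dump the metadata with repair to an XML representation and inspect it:
thin_dump --repair /dev/mapper/vg-thin_pool_tmeta > /tmp/tmeta.xml

# A repaired dump can be written back with thin_restore:
thin_restore -i /tmp/tmeta.xml -o /dev/mapper/vg-thin_pool_tmeta
```

All three tools ship in the device-mapper-persistent-data package discussed below.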
> (04:10:13) [root@node02:~]$

Why is thin_check missing here? If you want to skip the check for whatever reason, you have to set an empty string in the lvm.conf activation/thin_check_executable setting. Note that even if you only want to remove thin volumes and pools, the current version still checks them for errors first; if errors are found, removal is actually not so easy. This will be addressed in a later version (I think RHEL 6.4).

---

Well, truth be told, I do not know. This is the default behaviour; I did not change any paths manually, nor did I delete any parts of the package. Which package actually contains thin_check? And if it is needed, shouldn't it be set as a dependency? yum "told" me that the package providing it is device-mapper-persistent-data-0.1.4-1.el6.x86_64, but I do not have it installed.

---

> Which package actually contains thin_check?

device-mapper-persistent-data

> And if it is needed, shouldn't it be set as a dependency?

Not for the tech preview in 6.3, but later it will be installed by default.

---

When device-mapper-persistent-data is installed, the problems with corrupted metadata are no longer present. The test was done with thresholds of 50% and 90% and a 20% increase, without issues. The dmeventd daemon is started automatically and the automatic increase of the LV size works as expected.

---

> /usr/sbin/thin_check: execvp failed: No such file or directory

Do we need a message here for the tech preview asking the user to check that the device-mapper-persistent-data package is installed?
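Since thin_check ships separately in 6.3, a quick way to verify whether the tool is present is a check like the following (a sketch; the package name is per the discussion above, and /usr/sbin is assumed to be the standard RHEL install path):

```shell
#!/bin/sh
# Check whether thin_check is available on PATH or in /usr/sbin;
# on RHEL 6 it is shipped by the device-mapper-persistent-data package.
if command -v thin_check >/dev/null 2>&1 || [ -x /usr/sbin/thin_check ]; then
    echo "thin_check present"
else
    echo "thin_check missing: install device-mapper-persistent-data"
fi
```

Alternatively, as noted in the discussion, the check can be skipped entirely by setting thin_check_executable to an empty string in lvm.conf, though that forgoes the metadata safety check.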
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
No documentation needed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html