Bug 1464476 - Improve prevention of lvconvert failures in stacked configurations (e.g. thin on raid)
Summary: Improve prevention of lvconvert failures in stacked configurations (e.g. thin on raid)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-06-23 14:07 UTC by Roman Bednář
Modified: 2023-09-15 00:02 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-15 07:38:53 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Roman Bednář 2017-06-23 14:07:00 UTC
Although the option to convert thin data/metadata devices to RAID was added/fixed in RHEL 7.4, LVM can break quite badly when it hits a problem while an lvconvert is already in progress.
After the "Internal error" shown below, LVM stops working completely (see the -vvvv output at the bottom) and leaves devices in a suspended state. A possible workaround is to resume the suspended devices manually using dmsetup. The enhancement requested here is to add checks before even initiating the lvconvert, so that such failures (e.g. unexpectedly running out of space) are prevented up front.

lvm2-2.02.171-6.el7

============================================================
Iteration 1 of 1 started at Fri Jun 23 14:49:29 CEST 2017
============================================================
SCENARIO - [swap_inactive_thin_pool_meta_device_using_lvconvert]
Swap _tmeta devices with newly created volumes while pool is inactive multiple times
Making pool volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate  --yes --type raid10 -m 1 -i 2 --profile thin-performance --zero n -L 4M -n meta snapper_thinp
  WARNING: Logical volume snapper_thinp/meta not zeroed.
lvcreate  --type raid10 -m 1 -i 2 --profile thin-performance --zero n -L 1G -n POOL snapper_thinp
  WARNING: Logical volume snapper_thinp/POOL not zeroed.
Waiting until all mirror|raid volumes become fully syncd...
2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
Sleeping 15 sec
Sleeping 15 sec
lvconvert --zero n --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Sanity checking pool device (POOL) metadata
thin_check /dev/mapper/snapper_thinp-meta_swap.217
examining superblock
examining devices tree
examining mapping tree
checking space map counts
Making origin volume
Lowest size for pv needed: 1G
PVs to be used: /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1
Preparing cache pool for caching the external origin.
lvcreate  --zero y -L 1G -n CPOOL snapper_thinp /dev/sda1
lvcreate  --zero y -L 8M -n CPOOL_meta snapper_thinp /dev/sdb1
Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --poolmetadata snapper_thinp/CPOOL_meta snapper_thinp/CPOOL
  WARNING: Converting logical volume snapper_thinp/CPOOL and snapper_thinp/CPOOL_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create LV to be used as an CACHED EXTERNAL origin device.
lvcreate --type cache -n origin -L1G snapper_thinp/CPOOL
Converting cache LV into an external thin origin device
lvconvert --thinpool snapper_thinp/POOL --originname extorigin -T origin --yes
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
Making snapshot of origin volume
lvcreate  -k n -s /dev/snapper_thinp/origin -n snap
*** Swap corrupt pool metadata iteration 1 ***
Current tmeta device: POOL_tmeta_rimage_0
Corrupting pool meta device (/dev/mapper/snapper_thinp-POOL_tmeta)
dd if=/dev/urandom of=/dev/mapper/snapper_thinp-POOL_tmeta count=512 seek=4096 bs=1
512+0 records in
512+0 records out
512 bytes (512 B) copied, 0.000711283 s, 720 kB/s
Sanity checking pool device (POOL) metadata
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
thin_check /dev/mapper/snapper_thinp-meta_swap.133
examining superblock
examining devices tree
examining mapping tree
  thin device 1 is missing mappings [0, -]
 bad checksum in btree node (block 1)
  thin device 7 is missing mappings [0, -]
 bad checksum in btree node (block 1)
meta data appears corrupt
  Check of pool snapper_thinp/POOL failed (status:1). Manual repair required!
couldn't reactivate all volumes associated with pool device
Swap in new _tmeta device using lvconvert --repair
lvconvert --yes --repair snapper_thinp/POOL /dev/sdf1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  WARNING: If everything works, remove snapper_thinp/POOL_meta0 volume.
  WARNING: Use pvmove command to move snapper_thinp/POOL_tmeta on the best fitting PV.
New swapped tmeta device: /dev/sdi1
Sanity checking pool device (POOL) metadata
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
thin_check /dev/mapper/snapper_thinp-meta_swap.971
examining superblock
examining devices tree
examining mapping tree
checking space map counts
Convert the now repaired meta device back to a redundant raid1 volume
lvchange -a n snapper_thinp/POOL
lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL_tmeta
  Internal error: Performing unsafe table load while 20 device(s) are known to be suspended:  (253:2)
====================================================
# lvs -vvvv

#lvmcmdline.c:2763         Parsing: lvs -vvvv
#lvmcmdline.c:1848         Using command index 93 id lvs_general enum 73.
#config/config.c:1465       devices/global_filter not found in config: defaulting to global_filter = [ "a|.*/|" ]
#libdm-config.c:987       Setting global/locking_type to 1
#libdm-config.c:1051       Setting global/use_lvmetad to 1
#libdm-config.c:992       global/lvmetad_update_wait_time not found in config: defaulting to 10
#daemon-client.c:33         /run/lvm/lvmetad.socket: Opening daemon socket to lvmetad for protocol lvmetad version 1.
#daemon-client.c:52         Sending daemon lvmetad: hello
#libdm-config.c:956       Setting response to OK
#libdm-config.c:956       Setting protocol to lvmetad
#libdm-config.c:987       Setting version to 1
#cache/lvmetad.c:143         Successfully connected to lvmetad on fd 3.
#libdm-config.c:1051       Setting global/use_lvmpolld to 1
#libdm-config.c:1051       Setting devices/sysfs_scan to 1
#filters/filter-sysfs.c:326         Sysfs filter initialised.
#filters/filter-internal.c:77         internal filter initialised.
#filters/filter-type.c:56         LVM type filter initialised.
#filters/filter-usable.c:192         Usable device filter initialised.
#libdm-config.c:1051       Setting devices/multipath_component_detection to 1
#filters/filter-mpath.c:291         mpath filter initialised.
#filters/filter-partitioned.c:59         Partitioned filter initialised.
#libdm-config.c:1051       Setting devices/md_component_detection to 1
#filters/filter-md.c:73         MD filter initialised.
#libdm-config.c:1051       Setting devices/fw_raid_component_detection to 0
#filters/filter-composite.c:104         Composite filter initialised.
#libdm-config.c:1051       Setting devices/ignore_suspended_devices to 0
#libdm-config.c:1051       Setting devices/ignore_lvm_mirrors to 1
#config/config.c:1465       devices/filter not found in config: defaulting to filter = [ "a|.*/|" ]
#filters/filter-regex.c:216         Regex filter initialised.
#filters/filter-usable.c:192         Usable device filter initialised.
#filters/filter-composite.c:104         Composite filter initialised.
#libdm-config.c:956       Setting devices/cache_dir to /etc/lvm/cache
#libdm-config.c:956       Setting devices/cache_file_prefix to
#libdm-config.c:965       devices/cache not found in config: defaulting to /etc/lvm/cache/.cache
#filters/filter-persistent.c:368         Persistent filter initialised.
#filters/filter-composite.c:104         Composite filter initialised.
#libdm-config.c:1051       Setting devices/write_cache_state to 1
#libdm-config.c:1051       Setting global/use_lvmetad to 1
#libdm-config.c:956       Setting activation/activation_mode to degraded
#libdm-config.c:1064       metadata/record_lvs_history not found in config: defaulting to 0
#lvmcmdline.c:2833         DEGRADED MODE. Incomplete RAID LVs will be processed.
#libdm-config.c:1051       Setting activation/monitoring to 1
#lvmcmdline.c:2839         Processing: lvs -vvvv
#lvmcmdline.c:2840         Command pid: 9382
#lvmcmdline.c:2841         system ID:
#lvmcmdline.c:2844         O_DIRECT will be used
#libdm-config.c:987       Setting global/locking_type to 1
#libdm-config.c:1051       Setting global/wait_for_locks to 1
#locking/locking.c:129       File-based locking selected.
#libdm-config.c:1051       Setting global/prioritise_write_locks to 1
#libdm-config.c:956       Setting global/locking_dir to /run/lock/lvm
#libdm-common.c:975         Preparing SELinux context for /run/lock/lvm to system_u:object_r:lvm_lock_t:s0.
#libdm-common.c:978         Resetting SELinux context to default value.
#libdm-config.c:1051       Setting global/use_lvmlockd to 0
#cache/lvmetad.c:255         Sending lvmetad get_global_info
#libdm-config.c:956       Setting response to OK
#libdm-config.c:956       Setting token to filter:3239235440
#libdm-config.c:987       Setting daemon_pid to 2926
#libdm-config.c:956       Setting response to OK
#libdm-config.c:987       Setting global_disable to 0
#libdm-config.c:965       report/output_format not found in config: defaulting to basic
#libdm-config.c:1064       log/report_command_log not found in config: defaulting to 0
#libdm-config.c:1064       report/aligned not found in config: defaulting to 1
#libdm-config.c:1064       report/buffered not found in config: defaulting to 1
#libdm-config.c:1064       report/headings not found in config: defaulting to 1
#libdm-config.c:965       report/separator not found in config: defaulting to  
#libdm-config.c:1064       report/prefixes not found in config: defaulting to 0
#libdm-config.c:1064       report/quoted not found in config: defaulting to 1
#libdm-config.c:1064       report/columns_as_rows not found in config: defaulting to 0
#libdm-config.c:965       report/lvs_sort not found in config: defaulting to vg_name,lv_name
#libdm-config.c:965       report/lvs_cols_verbose not found in config: defaulting to lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile
#libdm-config.c:965       report/compact_output_cols not found in config: defaulting to
#toollib.c:3751         Get list of VGs on system
#cache/lvmetad.c:1440         Asking lvmetad for complete list of known VG ids/names
#libdm-config.c:956       Setting response to OK
#libdm-config.c:956       Setting response to OK
#libdm-config.c:956       Setting response to OK
#libdm-config.c:956       Setting name to snapper_thinp
#libdm-config.c:956       Setting name to rhel_virt-368
#toollib.c:3619       Processing VG snapper_thinp 7x9hZM-GCRN-rxER-6RTo-LIn9-gZ6v-W1grrX
#cache/lvmcache.c:522         lvmcache has no info for vgname "snapper_thinp".
#misc/lvm-flock.c:199       Locking /run/lock/lvm/V_snapper_thinp RB
#libdm-common.c:975         Preparing SELinux context for /run/lock/lvm/V_snapper_thinp to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:100         _do_flock /run/lock/lvm/V_snapper_thinp:aux WB
#misc/lvm-flock.c:47         _undo_flock /run/lock/lvm/V_snapper_thinp:aux
#misc/lvm-flock.c:100         _do_flock /run/lock/lvm/V_snapper_thinp RB

Comment 2 Zdenek Kabelac 2017-06-23 14:10:05 UTC
Yep

In general, every stacked manipulation should first check whether there is any danger of hitting a full pool or an error device.

This will need some extra work and validation in every command that operates on a thin pool.

Just as we do not allow creating a new thin LV in a pool that is above the threshold, there are other operations which should be prohibited.
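
A minimal sketch of the kind of pre-flight check being suggested (illustrative only; the 95% threshold and the wrapper-script approach are assumptions, not lvm2's eventual implementation):

# Query current data and metadata usage of the pool before converting
lvs --noheadings -o data_percent,metadata_percent snapper_thinp/POOL

# In a wrapper script, abort the conversion above a chosen threshold
usage=$(lvs --noheadings -o data_percent snapper_thinp/POOL | tr -d ' ')
if [ "${usage%%.*}" -ge 95 ]; then
    echo "Pool nearly full, refusing lvconvert" >&2
    exit 1
fi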

Comment 7 RHEL Program Management 2021-01-15 07:38:53 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 9 Red Hat Bugzilla 2023-09-15 00:02:42 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 500 days.

